114 research outputs found

    Meta-level learning for the effective reduction of model search space.

    The exponential growth in the volume, variety and velocity of data raises the need to investigate intelligent ways of extracting useful patterns from it. Finding the mapping of learning methods that leads to optimized performance on a given task requires deep expert knowledge and extensive computational resources. Moreover, the numerous configurations of these learning algorithms add another level of complexity. This triggers the need for an intelligent recommendation engine that can advise the best learning algorithm and its configuration for a given task. The techniques commonly used by experts include trial-and-error and reliance on prior experience in the specific domain. These techniques sometimes work for less complex tasks that require thousands of parameters to learn. However, state-of-the-art models, e.g. deep learning models, require well-tuned hyper-parameters to learn millions of parameters, which demands specialized skills and numerous computationally expensive and time-consuming trials. In that scenario, Meta-level learning can be a potential solution that recommends the most appropriate options efficiently and effectively regardless of the complexity of the data. At the same time, Meta-learning raises several challenges of its own, the most critical being model selection and hyper-parameter optimization. The goal of this research is to investigate model selection and hyper-parameter optimization approaches of automatic machine learning in general and the challenges associated with them. In the machine learning pipeline there are several phases where Meta-learning can be used to effectively facilitate the best recommendations, including 1) pre-processing steps, 2) the learning algorithm or a combination of algorithms, 3) adaptivity mechanism parameters, 4) recurring concept extraction, and 5) concept drift detection. The scope of this research is limited to feature engineering for problem representation, and to the learning strategy for recommending an algorithm and its hyper-parameters at the Meta-level. Three studies were conducted around two different approaches to automatic machine learning: model selection using Meta-learning and hyper-parameter optimization. The first study evaluates the situation in which the use of additional data from a different domain can improve the performance of a meta-learning system for time-series forecasting, with a focus on cross-domain Meta-knowledge transfer. Although the experiments revealed limited room for improvement over the overall best base-learner, the meta-learning approach turned out to be a safe choice, minimizing the risk of selecting the least appropriate base-learner: in only 2% of cases did meta-learning recommend the worst-performing base-learning method. The second study proposes another efficient and accurate domain adaptation approach, but using a different meta-learning strategy. This study empirically confirms the intuition that there exists a relationship between the similarity of two different tasks and the depth of network that needs to be fine-tuned in order to achieve accuracy comparable with that of a model trained from scratch. However, the approach is limited to a single hyper-parameter, namely the network depth to fine-tune based on task similarity. The final study of this research expands the set of hyper-parameters while implicitly considering task similarity through the intrinsic dynamics of the training process.
    The study presents a framework that automatically finds a good set of hyper-parameters resulting in reasonably good accuracy, by framing hyper-parameter selection and tuning within the reinforcement learning regime. The effectiveness of a recommended tuple can be tested very quickly rather than waiting for the network to converge. This approach produces accuracy close to the state of the art and is found to be about 20% less computationally expensive than previous approaches. The proposed methods in these studies, belonging to different areas of automatic machine learning, have been thoroughly evaluated on a number of benchmark datasets, which confirmed the great potential of these methods.
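
    As a loose illustration of the idea of testing a recommended hyper-parameter tuple cheaply rather than training to convergence, the sketch below treats a small grid of tuples as arms of an epsilon-greedy bandit scored by a hypothetical quick-evaluation proxy. It is not the thesis's reinforcement learning framework; the grid, the proxy reward and the epsilon value are illustrative assumptions.

    # Bandit-style sketch (assumption, not the thesis implementation): each
    # hyper-parameter tuple is an arm, and the reward is a cheap proxy for
    # validation accuracy instead of full training to convergence.
    import itertools
    import random

    grid = list(itertools.product([1e-1, 1e-2, 1e-3], [32, 64, 128]))  # (lr, batch size)

    def proxy_reward(lr, batch_size):
        """Hypothetical quick evaluation, e.g. accuracy after a few training steps.
        Replaced here by a noisy synthetic function peaking at lr=1e-2, batch=64."""
        base = 0.9 - abs(lr - 1e-2) * 5 - abs(batch_size - 64) / 1000
        return base + random.gauss(0, 0.02)

    counts = [0] * len(grid)
    values = [0.0] * len(grid)
    epsilon, budget = 0.2, 200

    for _ in range(budget):
        if random.random() < epsilon:
            arm = random.randrange(len(grid))                     # explore
        else:
            arm = max(range(len(grid)), key=lambda i: values[i])  # exploit
        reward = proxy_reward(*grid[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]       # running mean

    best = max(range(len(grid)), key=lambda i: values[i])
    print("recommended tuple:", grid[best], "estimated reward:", round(values[best], 3))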

    Tree models: a Bayesian perspective

    Submitted in partial fulfilment of the requirements for the degree of Master of Philosophy at Queen Mary, University of London, November 2006. Classical tree models represent an attempt to create nonparametric models which have good predictive power as well as a simple structure readily comprehensible by non-experts. Bayesian tree models have been created by a team consisting of Chipman, George and McCulloch and by a second team consisting of Denison, Mallick and Smith. Both approaches employ Green's Reversible Jump Markov Chain Monte Carlo technique to carry out a more effective search than the 'greedy' methods used classically. The aim of this work is to evaluate both types of Bayesian tree models from a Bayesian perspective and compare them.

    Augmented reality in support of Industry 4.0—Implementation challenges and success factors

    Industrial augmented reality (AR) is an integral part of Industry 4.0 concepts, as it enables workers to access digital information and overlay that information onto the physical world. While not yet broadly adopted in some applications, the industrial AR market is projected to grow rapidly in compound annual terms. Hence, it is important to understand the issues arising from the implementation of AR in industry. This study identifies critical success factors and challenges for industrial AR implementation projects, based on an industry survey. The broadly used technology, organisation, environment (TOE) framework is used as a theoretical basis for the quantitative part of the questionnaire. A complementary qualitative part is used to underpin and extend the findings. It is found that, while technological aspects are of importance, organisational issues are more relevant for industry, which has not been reflected to the same extent in the literature.

    Meningioma classification using an adaptive discriminant wavelet packet transform

    Meningioma subtype classification is a real-world problem from the domain of histological image analysis that requires new methods for its resolution. Computerised histopathology presents a whole new set of problems and introduces new challenges in image classification. High intra-class variation and low inter-class differences in texture are often an issue in histological image analysis problems such as meningioma subtype classification. In this thesis, we present an adaptive wavelet-based technique that adapts to the variation in the texture of meningioma samples and provides high classification accuracy. The technique provides a mechanism for attaining an image representation consisting of various spatial frequency resolutions, referred to as subbands, each of which provides different information about the texture in the image sample. Our novel method, the Adaptive Discriminant Wavelet Packet Transform (ADWPT), provides a means for selecting the most useful subbands and hence achieves feature selection. It also provides a mechanism for ranking features based upon the discrimination power of a subband: the more discriminant a subband, the better it is for classification. The results show that high classification accuracies are obtained by selecting subbands with high discrimination power. Moreover, subbands that are more stable, i.e. have a higher probability of being selected, provide better classification accuracies. Stability and discrimination power are shown to have a direct relationship with classification accuracy. Hence, ADWPT acquires a subset of subbands that provides a highly discriminant and robust set of features for meningioma subtype classification. Classification accuracies obtained are greater than 90% for most meningioma subtypes. Consequently, ADWPT is a robust and adaptive technique, which enables it to overcome the issue of high intra-class variation by statistically selecting the most useful subbands for meningioma subtype classification. It overcomes the issue of low inter-class variation by adapting to texture samples and extracting the subbands that are best for differentiating between the various meningioma subtype textures.
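
    The core ADWPT idea of ranking wavelet packet subbands by their discrimination power can be sketched with PyWavelets, as below. The synthetic textures, wavelet choice and Fisher-ratio score are assumptions for illustration, not the thesis's actual data or scoring scheme.

    # Sketch: decompose each image into wavelet packet subbands, summarise each
    # subband by its log-energy, and rank subbands by a two-class Fisher ratio.
    import numpy as np
    import pywt

    def subband_energies(img, wavelet="db2", level=2):
        """Log-energy of every subband at the given wavelet packet level."""
        wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
        return {node.path: np.log1p(np.sum(node.data ** 2))
                for node in wp.get_level(level, order="natural")}

    def fisher_ratio(a, b):
        """Simple two-class discrimination score for a single feature."""
        a, b = np.asarray(a), np.asarray(b)
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for two texture classes (e.g. two meningioma subtypes).
    class_a = [rng.normal(size=(64, 64)) for _ in range(20)]
    class_b = [np.cumsum(rng.normal(size=(64, 64)), axis=1) for _ in range(20)]

    feats_a = [subband_energies(im) for im in class_a]
    feats_b = [subband_energies(im) for im in class_b]

    scores = {path: fisher_ratio([f[path] for f in feats_a], [f[path] for f in feats_b])
              for path in feats_a[0]}

    # The most discriminant subbands form the selected feature subset.
    for path, score in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
        print(f"subband {path}: discrimination score {score:.2f}")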

    Applying Machine Learning to Cyber Security

    Intrusion Detection Systems (IDS) are nowadays a very important part of a system. In recent years many methods have been proposed to implement this kind of security measure against cyber attacks, including Machine Learning and Data Mining based approaches. In this work we discuss in detail the family of anomaly-based IDSs, which are able to detect never-before-seen attacks, paying particular attention to adherence to the FAIR principles. These principles include the Accessibility and Reusability of software. Moreover, as the purpose of this work is to assess the state of the art, we have selected three approaches according to their reproducibility and compared their performance in a common experimental setting. Lastly, a real-world use case has been analyzed, resulting in the proposal of an unsupervised ML model for pre-processing and analyzing web server logs. The proposed solution uses clustering and outlier detection techniques to detect attacks in an unsupervised way.
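
    A minimal sketch of the kind of unsupervised pipeline described above (vectorising web server log lines and flagging outliers) is shown below. The sample log lines, TF-IDF settings and contamination rate are illustrative assumptions, not the paper's configuration.

    # Sketch: character n-gram TF-IDF features over raw log lines, with an
    # Isolation Forest marking the most isolated lines as candidate attacks.
    from sklearn.ensemble import IsolationForest
    from sklearn.feature_extraction.text import TfidfVectorizer

    log_lines = [
        'GET /index.html HTTP/1.1 200',
        'GET /about.html HTTP/1.1 200',
        'GET /index.html HTTP/1.1 200',
        'GET /contact.html HTTP/1.1 200',
        "GET /search.php?q=' OR '1'='1 HTTP/1.1 500",   # SQL-injection-like request
        'GET /../../etc/passwd HTTP/1.1 404',           # path traversal attempt
    ]

    # Character n-grams avoid having to define a fixed log vocabulary.
    vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    X = vectoriser.fit_transform(log_lines)

    detector = IsolationForest(contamination=0.3, random_state=0)
    labels = detector.fit_predict(X.toarray())          # -1 = outlier, 1 = inlier

    for line, label in zip(log_lines, labels):
        print("ANOMALY" if label == -1 else "normal ", "|", line)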

    Investigation of artificial immune systems and variable selection techniques for credit scoring

    Most lending institutions are aware of the importance of having a well-performing credit scoring model or scorecard and know that, in order to remain competitive in the credit industry, it is necessary to continuously improve their scorecards. This is because better scorecards result in substantial monetary savings that can be stated in terms of millions of dollars. Thus, there has been increasing interest in the application of new classifiers in credit scoring from both practitioners and researchers in the last few decades. Most of the recent work in this field has focused on the use of new and innovative techniques to classify applicants as either 'credit-worthy' or 'non-credit-worthy', with the aim of improving scorecard performance. In this thesis, we investigate the suitability of intelligent systems techniques for credit scoring. In particular, intelligent systems that use immunological metaphors are examined and used to build a learning and evolutionary classification algorithm. Our model, named Simple Artificial Immune System (SAIS), is based on the concepts of the natural immune system. The model uses applicants' credit details to classify them as either 'credit-worthy' or 'non-credit-worthy'. As part of the model development, we also investigate several techniques for selecting variables from the applicants' credit details. Variable selection is important as choosing the best set of variables can have a significant effect on the performance of scorecards. Interestingly, our results demonstrate that the traditional stepwise regression variable selection technique seems to perform better than many of the more recent techniques. A further contribution offered by this thesis is a detailed description of the scorecard development process. A detailed explanation of this process is not readily available in the literature and our description is based on our own experiences and discussions with industry credit risk practitioners. We evaluate our model using both publicly available datasets and a very large set of real-world consumer credit scoring data obtained from a leading Australian bank. The evaluation results reveal that SAIS is a competitive classifier and is appropriate for developing scorecards which require a class decision as an outcome. Another conclusion, one confirmed by the existing literature, is that even though more sophisticated scorecard development techniques, including SAIS, perform well compared to the traditional statistical methods, their performance is not statistically significantly different from that of the statistical methods. As with other intelligent systems techniques, SAIS is not explicitly designed to develop practical scorecards, which require the generation of a score representing the degree of confidence that an applicant will belong to a particular group. However, it is comparable to other intelligent systems techniques, which are outperformed by statistical techniques for generating practical scorecards. Our final remark on this research is that even though SAIS does not seem to be quite suitable for developing practical scorecards, we still believe that there is room for improvement and that the natural immune system of the body has a number of avenues yet to be explored which could assist with the development of practical scorecards.
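
    To illustrate the stepwise variable selection that the evaluation found surprisingly competitive, the sketch below runs forward stepwise selection around a logistic regression scored by cross-validated AUC on synthetic data. The dataset, stopping rule and scoring metric are assumptions, not the thesis's scorecard procedure.

    # Sketch: greedily add the variable that most improves cross-validated AUC,
    # stopping when no candidate gives a meaningful gain.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=10, n_informative=4, random_state=0)

    def cv_auc(cols):
        model = LogisticRegression(max_iter=1000)
        return cross_val_score(model, X[:, cols], y, cv=5, scoring="roc_auc").mean()

    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining:
        trial = {col: cv_auc(selected + [col]) for col in remaining}
        col, score = max(trial.items(), key=lambda kv: kv[1])
        if score <= best_score + 1e-4:                   # no meaningful improvement
            break
        selected.append(col)
        remaining.remove(col)
        best_score = score
        print(f"added variable {col}: cross-validated AUC = {score:.3f}")

    print("selected variables:", selected)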

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed, in which multi-model adaptive filters are used to estimate other players’ strategies. The proposed algorithm can be used as a coordination mechanism between players when they have to take decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players and also the uncertainty. Uncertainty can occur either in terms of noisy observations or various types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori. Various parameter values can be used initially as inputs to different models, so the resulting decisions are aggregate results over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
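
    For readers unfamiliar with the baseline, the sketch below implements plain fictitious play on a 2x2 coordination game, where each player best-responds to the empirical frequencies of the other's past actions. The paper's variant additionally estimates opponents' strategies with multi-model adaptive filters, which this toy example deliberately omits.

    # Sketch: vanilla fictitious play on an identical-interest 2x2 game.
    import numpy as np

    A = np.array([[2.0, 0.0],
                  [0.0, 1.0]])         # row player's payoffs
    B = A.copy()                       # column player's payoffs (identical interests)

    counts = [np.ones(2), np.ones(2)]  # pseudo-counts of each player's observed actions

    for t in range(200):
        beliefs = [c / c.sum() for c in counts]
        a0 = int(np.argmax(A @ beliefs[1]))      # row best-responds to belief about column
        a1 = int(np.argmax(B.T @ beliefs[0]))    # column best-responds to belief about row
        counts[0][a0] += 1                       # column player observes row's action
        counts[1][a1] += 1                       # row player observes column's action

    print("empirical strategies:", [np.round(c / c.sum(), 2) for c in counts])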

    Machine learning techniques for high dimensional data

    This thesis presents data processing techniques for three different but related application areas: embedding learning for classification, fusion of low bit depth images, and 3D reconstruction from 2D images. For embedding learning for classification, a novel manifold embedding method is proposed for the automated processing of large, varied data sets. The method is based on binary classification, where the embeddings are constructed so as to determine one or more unique features for each class individually from a given dataset. The proposed method is applied to examples of multiclass classification that are relevant for large-scale data processing for surveillance (e.g. face recognition), where the aim is to augment decision making by reducing extremely large sets of data to a manageable level before displaying the selected subset to a human operator. In addition, an indicator for a weighted pairwise constraint is proposed to balance the contributions from different classes to the final optimisation, in order to better control the relative positions between the important data samples from either the same class (intraclass) or different classes (interclass). The effectiveness of the proposed method is evaluated through comparison with seven existing techniques for embedding learning, using four established databases of faces, covering various poses, lighting conditions and facial expressions, as well as two standard text datasets. The proposed method performs better than these existing techniques, especially for cases with small sets of training data samples. For fusion of low bit depth images, using low bit depth images instead of full images offers a number of advantages for aerial imaging with UAVs, where there is a limited transmission rate/bandwidth: it reduces the need for data transmission, removes superfluous details, and reduces the computational load on on-board platforms (especially for small or micro-scale UAVs). The main drawback of using low bit depth imagery is that image details of the scene are discarded. Fortunately, these can be reconstructed by fusing a sequence of related low bit depth images that have been properly aligned. To reduce computational complexity and obtain a less distorted result, a similarity transformation is used to approximate the geometric alignment between two images of the same scene. The transformation is estimated using a phase correlation technique. It is shown that the phase correlation method is capable of registering low bit depth images without any modification or any pre- and/or post-processing. For 3D reconstruction from 2D images, a method is proposed to deal with dense reconstruction after a sparse reconstruction (i.e. a sparse 3D point cloud) has been created using the structure from motion technique. Instead of generating a dense 3D point cloud, the proposed method forms triangles from points in the sparse point cloud and then maps the corresponding components in the 2D images back to the point cloud. Compared to existing methods that use a similar approach, this method reduces the computational cost: instead of utilising every triangle in 3D space to do the mapping from 2D to 3D, it uses a large triangle to replace a number of small triangles for flat and almost-flat areas. Compared to the reconstruction results obtained by existing techniques that aim to generate a dense point cloud, the proposed method achieves a better result while the computational cost remains comparable.
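
    The phase correlation step described above can be sketched directly in NumPy: the translation between two images is recovered from the peak of the inverse FFT of the normalised cross-power spectrum. The 2-bit quantisation used to mimic low bit depth imagery is an illustrative assumption.

    # Sketch: estimate the integer translation between a reference image and a
    # shifted, low-bit-depth copy via phase correlation.
    import numpy as np

    def phase_correlation(ref, shifted):
        """Return the shift d such that `shifted` is approximately ref rolled by d."""
        F_ref, F_sh = np.fft.fft2(ref), np.fft.fft2(shifted)
        cross_power = F_sh * np.conj(F_ref)
        cross_power /= np.abs(cross_power) + 1e-12       # keep phase information only
        corr = np.abs(np.fft.ifft2(cross_power))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Peaks past half the image size correspond to negative shifts.
        return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

    rng = np.random.default_rng(1)
    img = rng.random((128, 128))
    moved = np.roll(img, shift=(7, -12), axis=(0, 1))    # known ground-truth shift

    quantise = lambda x: np.floor(x * 4) / 4             # crude 2-bit quantisation
    print("estimated shift:", phase_correlation(quantise(img), quantise(moved)))  # ~[7, -12]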

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers’ analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In fact, the range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth in computational power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need to develop novel approaches.