28,264 research outputs found

    Relatedness Measures to Aid the Transfer of Building Blocks among Multiple Tasks

    Full text link
    Multitask Learning is a learning paradigm that deals with multiple different tasks in parallel and transfers knowledge among them. XOF, a Learning Classifier System using tree-based programs to encode building blocks (meta-features), constructs and collects features with rich discriminative information for classification tasks in an observed list. This paper seeks to facilitate the automation of feature transfer between tasks by utilising the observed list. We hypothesise that the best discriminative features of a classification task carry its characteristics; therefore, the relatedness between any two tasks can be estimated by comparing their most appropriate patterns. We propose a multiple-XOF system, called mXOF, that can dynamically adapt feature transfer among XOFs. This system utilises the observed list to estimate task relatedness, which enables the automation of feature transfer. In terms of knowledge discovery, the resemblance estimation provides insightful relations among multiple datasets. We experimented with mXOF on various scenarios, e.g. representative Hierarchical Boolean problems, classification of distinct classes in the UCI Zoo dataset, and unrelated tasks, to validate its abilities of automatic knowledge transfer and task-relatedness estimation. Results show that mXOF can reasonably estimate the relatedness between multiple tasks to aid learning performance through dynamic feature transfer.
    Comment: accepted by The Genetic and Evolutionary Computation Conference (GECCO 2020)
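
    A minimal sketch of the relatedness idea described above, assuming each task exposes its current list of best discriminative building blocks; the Jaccard-style overlap and the threshold below are illustrative assumptions, not the exact mXOF formulation:

        # Illustrative sketch: estimate relatedness of two tasks by comparing
        # their lists of best discriminative building blocks (meta-features).
        # The Jaccard-style overlap is an assumed stand-in for the paper's
        # actual comparison of observed lists.

        def task_relatedness(observed_list_a, observed_list_b):
            """Return a score in [0, 1]; higher means the tasks look more related."""
            features_a, features_b = set(observed_list_a), set(observed_list_b)
            if not features_a or not features_b:
                return 0.0
            return len(features_a & features_b) / len(features_a | features_b)

        def should_transfer(observed_list_a, observed_list_b, threshold=0.3):
            """Only enable feature transfer between learners for related tasks."""
            return task_relatedness(observed_list_a, observed_list_b) >= threshold

        # Example: two Boolean tasks sharing one building-block pattern.
        print(should_transfer({"x0 AND x1", "x2 XOR x3"}, {"x0 AND x1", "x4 OR x5"}))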

    Robust and cost-effective approach for discovering action rules

    Get PDF
    The main goal of Knowledge Discovery in Databases is to find interesting and usable patterns that are meaningful in their domain. Actionable Knowledge Discovery came into existence as a direct response to the need for more usable patterns, called actionable patterns. Traditional data mining methods and algorithms are often confined to delivering frequent patterns and come up short in suggesting how to make these patterns actionable. In this scenario the users are expected to act, yet they are not advised about what to do with the delivered patterns in order to make them usable. In this paper, we present an automated approach that focuses not only on creating rules but also on making the discovered rules actionable. The few works reported in this field so far lack comprehensibility to the user, overlook cost, and do not provide rule generality. Here we attempt to present a method that resolves these issues. We propose the CEARDM method to discover cost-effective action rules from data. These rules suggest cost-effective changes for transferring low-profit instances to higher-profit ones. We also propose an idea for improving the CEARDM method.
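
    As a rough illustration of what makes an action rule cost-effective, consider the sketch below; the attribute names, costs, and class profits are hypothetical examples, not CEARDM's actual representation or scoring:

        # Illustrative sketch: an action rule suggests attribute changes that move
        # an instance from a low-profit class to a higher-profit one, and is kept
        # only if the expected profit lift outweighs the cost of the changes.
        # Attribute names, costs, and profits are made-up examples.

        CHANGE_COST = {"service_plan": 5.0, "contact_channel": 1.0}   # cost per changed attribute
        CLASS_PROFIT = {"low_value": 10.0, "high_value": 40.0}

        def rule_net_gain(changes, from_class, to_class):
            """Net gain of applying an action rule: profit lift minus change costs."""
            cost = sum(CHANGE_COST[attr] for attr, _new_value in changes)
            lift = CLASS_PROFIT[to_class] - CLASS_PROFIT[from_class]
            return lift - cost

        # Action rule: change service_plan to "premium" and contact_channel to "email".
        changes = [("service_plan", "premium"), ("contact_channel", "email")]
        gain = rule_net_gain(changes, "low_value", "high_value")
        print(f"net gain = {gain}")   # keep the rule only if the net gain is positive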

    Machine learning and its applications in reliability analysis systems

    Get PDF
    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss possible applications of ML in improving RA performance, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both Neural Network learning and Symbolic learning. In symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode and effect analysis (FMEA); and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating RAs are discussed. According to the results of our survey, we suggest that currently the best design of RAs is to embed model-based RAs, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By implanting a 'learning element', MORA becomes the learning MORA (La MORA) system, a learning Reliability Analysis system with the power of automatic knowledge acquisition, inconsistency checking, and more. To conclude the thesis, we propose an architecture for La MORA.

    TinkerCell: Modular CAD Tool for Synthetic Biology

    Get PDF
    Synthetic biology brings together concepts and techniques from engineering and biology. In this field, computer-aided design (CAD) is necessary in order to bridge the gap between computational modeling and biological data. An application named TinkerCell has been created to serve as a CAD tool for synthetic biology. TinkerCell is a visual modeling tool that supports a hierarchy of biological parts. Each part in this hierarchy consists of a set of attributes that define the part, such as sequence or rate constants. Models constructed from these parts can be analyzed using various C and Python programs that are hosted by TinkerCell via an extensive C and Python API. TinkerCell supports the notion of modules, which are networks with interfaces. Such modules can be connected to each other, forming larger modular networks. Because TinkerCell associates the parameters and equations in a model with their respective parts, parts can be loaded from databases along with their parameters and rate equations. The modular network design can be used to exchange modules as well as to test the concept of modularity in biological systems. The flexible modeling framework, along with the C and Python API, allows TinkerCell to serve as a host to numerous third-party algorithms. TinkerCell is a free and open-source project under the Berkeley Software Distribution license. Downloads, documentation, and tutorials are available at www.tinkercell.com.
    Comment: 23 pages, 20 figures
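
    The part/module idea can be pictured with a small conceptual data model; this is not TinkerCell's actual C/Python API, just an assumed sketch of parts carrying attributes and modules connected through named interfaces:

        # Conceptual sketch only: parts carry attributes (sequence, rate
        # constants, ...) and modules are networks of parts exposed through
        # named interfaces, so larger networks are built by wiring modules.
        # Class and field names are assumptions, not the TinkerCell API.
        from dataclasses import dataclass, field

        @dataclass
        class Part:
            name: str
            attributes: dict            # e.g. {"sequence": "ATG...", "k_cat": 0.5}

        @dataclass
        class Module:
            name: str
            parts: list                 # Parts (or nested Modules) in this network
            interfaces: dict = field(default_factory=dict)   # interface name -> Part

        def connect(module_a, iface_a, module_b, iface_b):
            """Wire two modules by pairing their interface parts."""
            return (module_a.interfaces[iface_a], module_b.interfaces[iface_b])

        promoter = Part("pLac", {"sequence": "ACGT...", "strength": 1.2})
        sensor = Module("lac_sensor", [promoter], {"output": promoter})
        rbs = Part("rbs1", {"sequence": "AGGAGG"})
        reporter = Module("gfp_reporter", [rbs], {"input": rbs})
        print(connect(sensor, "output", reporter, "input"))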

    Structure Selection of Polynomial NARX Models using Two Dimensional (2D) Particle Swarms

    Full text link
    The present study applies a novel two-dimensional learning framework (2D-UPSO) based on particle swarms to the structure selection of polynomial nonlinear auto-regressive with exogenous inputs (NARX) models. This learning approach explicitly incorporates information about the cardinality (i.e., the number of terms) into the structure selection process. Initially, the effectiveness of the proposed approach was compared against a classical genetic algorithm (GA) based approach, and the 2D-UPSO was shown to be superior. Further, since the performance of any meta-heuristic search algorithm depends critically on the choice of the fitness function, the efficacy of the proposed approach was investigated using two distinct information-theoretic criteria, the Akaike and the Bayesian information criterion. The robustness of the approach against various levels of measurement noise was also studied. Simulation results on various nonlinear systems demonstrate that the proposed algorithm can accurately determine the structure of the polynomial NARX model even under the influence of measurement noise.
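
    To make the fitness criteria concrete, here is a minimal sketch of scoring one candidate NARX term subset by least-squares fit plus an AIC/BIC penalty; the Gaussian-likelihood form of the criteria, the toy system, and the chosen terms are assumptions, and the 2D-UPSO search over subsets is not shown:

        # Illustrative sketch: score a candidate polynomial NARX structure
        # (a subset of regressor terms) by least-squares residual variance
        # plus an information-criterion penalty (AIC or BIC). A swarm would
        # search over such subsets; data and terms below are toy assumptions.
        import numpy as np

        def score_structure(y, regressors, use_bic=True):
            """regressors: (N, k) matrix of selected NARX terms, y: (N,) output."""
            n, k = regressors.shape
            theta, *_ = np.linalg.lstsq(regressors, y, rcond=None)
            sigma2 = np.mean((y - regressors @ theta) ** 2)
            penalty = k * (np.log(n) if use_bic else 2.0)   # BIC vs AIC penalty
            return n * np.log(sigma2) + penalty              # lower is better

        # Toy system: y(t) = 0.5*y(t-1) + u(t-1)^2 + noise
        rng = np.random.default_rng(0)
        u = rng.uniform(-1, 1, 200)
        y = np.zeros(200)
        for t in range(1, 200):
            y[t] = 0.5 * y[t - 1] + u[t - 1] ** 2 + 0.01 * rng.standard_normal()

        # Candidate structure containing the terms {y(t-1), u(t-1)^2}:
        X = np.column_stack([y[:-1], u[:-1] ** 2])
        print(score_structure(y[1:], X))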