
    Cancer subtype identification pipeline: a classifusion approach

    Classification of cancer patients into treatment groups is essential for appropriate diagnosis and increased survival. Previously, a series of papers, largely published in the breast cancer domain, have leveraged Computational Intelligence (CI) developments and tools, resulting in groundbreaking advances such as the classification of cancer into newly identified classes, leading to improved treatment options. However, the current literature on the use of CI to achieve this is fragmented, making further advances challenging. This paper captures developments in the area so far, with the goal of establishing a clear, step-by-step pipeline for cancer subtype identification. Building on this pipeline, the paper identifies key potential advances in CI at the individual steps, thus establishing a roadmap for future research. As such, the aim of the paper is to engage the CI community in addressing the research challenges and leveraging the strong potential of CI in this important area. Finally, we present a small set of recent findings on the Nottingham Tenovus Primary Breast Carcinoma Series, enabling the classification of a higher number of patients into one of the identified breast cancer groups, and introduce Classifusion: a combination of the results of multiple classifiers.
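
    The abstract does not spell out how Classifusion fuses classifier outputs; as a rough illustration of combining the results of multiple classifiers, a minimal majority-vote sketch (hypothetical function and label names, not taken from the paper) could look like this:

    ```python
    from collections import Counter

    def fuse_predictions(predictions_per_classifier):
        """Majority-vote fusion of per-sample labels from several classifiers.

        predictions_per_classifier: list of label sequences, one per classifier,
        all of equal length (one label per patient/sample).
        """
        fused = []
        for sample_labels in zip(*predictions_per_classifier):
            # Keep the label predicted by the largest number of classifiers.
            most_common_label, _ = Counter(sample_labels).most_common(1)[0]
            fused.append(most_common_label)
        return fused

    # Hypothetical example: three classifiers assigning four patients to subtypes.
    clf_a = ["luminal_a", "luminal_b", "basal", "her2"]
    clf_b = ["luminal_a", "luminal_a", "basal", "her2"]
    clf_c = ["luminal_b", "luminal_b", "basal", "luminal_a"]
    print(fuse_predictions([clf_a, clf_b, clf_c]))
    # -> ['luminal_a', 'luminal_b', 'basal', 'her2']
    ```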

    From approximative to descriptive fuzzy models


    Fuzzy set covering as a new paradigm for the induction of fuzzy classification rules

    In 1965 Lotfi A. Zadeh proposed fuzzy sets as a generalization of crisp (or classic) sets to address the inability of crisp sets to model the uncertainty and vagueness inherent in the real world. Initially, fuzzy sets did not receive a very warm welcome, as many academics remained skeptical towards a theory of "imprecise" mathematics. In the mid-to-late 1980s the success of fuzzy controllers brought fuzzy sets into the limelight, and many applications using fuzzy sets started appearing. In the early 1970s the first machine learning algorithms started appearing. The AQ family of algorithms pioneered by Ryszard S. Michalski is a good example of the family of set covering algorithms. This class of learning algorithms induces concept descriptions by greedily constructing rules that describe (or cover) positive training examples but not negative training examples. The learning process is iterative: in each iteration one rule is induced, and the positive examples covered by that rule are removed from the set of positive training examples. Because positive instances are separated from negative instances, the term separate-and-conquer has been used to contrast this learning strategy with decision tree induction, which uses a divide-and-conquer learning strategy. This dissertation proposes fuzzy set covering as a powerful rule induction strategy. We survey existing fuzzy learning algorithms and conclude that very few fuzzy learning algorithms follow a greedy rule construction strategy, and that no publication to date has made the link between fuzzy sets and set covering explicit. We first develop the theoretical aspects of fuzzy set covering, and then apply these in proposing the first fuzzy learning algorithm that applies set covering and makes explicit use of a partial order for fuzzy classification rule induction. We also investigate several strategies to improve upon the basic algorithm, such as better search heuristics and different rule evaluation metrics. We then propose a general unifying framework for fuzzy set covering algorithms. We demonstrate the benefits of the framework and propose several further fuzzy set covering algorithms that fit within it. We compare fuzzy and crisp rule induction, and provide arguments in favour of fuzzy set covering as a rule induction strategy. We also show that our learning algorithms outperform other fuzzy rule learners on real-world data. We further explore the idea of simultaneous concept learning in the fuzzy case, and go on to propose the first fuzzy decision list induction algorithm. Finally, we propose a first strategy for encoding the rule sets generated by our fuzzy set covering algorithms inside an equivalent neural network.
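
    The separate-and-conquer (set covering) loop described in the abstract can be summarised in a short sketch. This is a generic, crisp illustration assuming hypothetical helpers learn_one_rule and covers; it is not the dissertation's fuzzy algorithm:

    ```python
    def separate_and_conquer(positives, negatives, learn_one_rule, covers):
        """Generic set covering (separate-and-conquer) rule induction.

        Greedily learns rules that cover positive examples while excluding
        negatives; covered positives are removed after each iteration.
        """
        rules = []
        remaining = list(positives)
        while remaining:
            # Hypothetical helper: greedily build one rule that covers some of
            # the remaining positives and (ideally) none of the negatives.
            rule = learn_one_rule(remaining, negatives)
            if rule is None:  # no useful rule could be found; stop early
                break
            rules.append(rule)
            # "Separate": drop the positives this rule already covers.
            remaining = [x for x in remaining if not covers(rule, x)]
        return rules
    ```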

    Automated code compliance checking in the construction domain using semantic natural language processing and logic-based reasoning

    Construction projects must comply with various regulations. The manual process of checking compliance with regulations is costly, time consuming, and error prone. With advances in computing technology, there have been many research efforts to automate the compliance checking process, and many software development efforts led by industry bodies/associations, software companies, and/or government organizations to develop automated compliance checking (ACC) systems. However, two main gaps in existing ACC efforts are: (1) manual effort is needed to extract requirements from regulatory documents and encode these requirements in a computer-processable rule format; and (2) there is a lack of a semantic representation for supporting automated compliance reasoning that is non-proprietary, transparent, and user-understandable and testable. To address these gaps, this thesis proposes a new ACC method that: (1) utilizes semantic natural language processing (NLP) techniques to automatically extract regulatory information from building codes and design information from building information models (BIMs); and (2) utilizes a semantic, logic-based representation to represent and reason about the extracted regulatory and design information for compliance checking. The proposed method is composed of four main methods/algorithms combined in one computational framework: (1) a semantic, rule-based method and algorithm that leverages NLP techniques to automatically extract regulatory information from building codes and represent the extracted information as semantic tuples; (2) a semantic, rule-based method and algorithm that leverages NLP techniques to automatically transform the extracted regulatory information into logic rules in preparation for automated reasoning; (3) a semantic, rule-based information extraction and transformation method and algorithm to automatically extract design information from BIMs and transform the extracted information into logic facts in preparation for automated reasoning; and (4) a logic-based information representation and compliance reasoning schema to represent regulatory and design information for enabling the automated compliance reasoning process. To test the proposed method, a building information model test case was developed based on the Duplex Apartment Project from the buildingSMARTalliance of the National Institute of Building Sciences. The test case was checked for compliance with a randomly selected chapter, Chapter 19, of the International Building Code 2009. Compared to a manually developed gold standard, 87.6% precision and 98.7% recall in noncompliance detection were achieved on the testing data.
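
    For reference, the reported precision and recall of noncompliance detection follow the standard definitions against a gold standard; the sketch below is purely illustrative and uses made-up data, not the thesis's results:

    ```python
    def precision_recall(predicted, gold):
        """Precision and recall of noncompliance detection against a gold standard.

        predicted, gold: sets of items (e.g. regulatory provisions) flagged as
        noncompliant by the system and by the manual gold standard, respectively.
        """
        true_positives = len(predicted & gold)
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        return precision, recall

    # Illustrative (made-up) example, not the thesis's evaluation data:
    system_flags = {"r1", "r2", "r3", "r5"}
    gold_flags = {"r1", "r2", "r3", "r4"}
    print(precision_recall(system_flags, gold_flags))  # -> (0.75, 0.75)
    ```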

    Fifth Conference on Artificial Intelligence for Space Applications

    The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work to help those who employ AI methods in space applications identify common goals and address issues of general interest to the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/database integration.

    Mathematical analysis for tumor growth model of ordinary differential equations

    Special functions occur quite frequently in mathematical analysis and lend themselves readily to physical and engineering applications. Among the special functions, the gamma function is one of the most widely used. The purpose of this thesis is to analyse the various properties of the gamma function and to use these properties and its definition to derive and tackle integration problems that occur quite frequently in applications. It should be noted that if elementary techniques such as substitution and integration by parts were used to tackle most of these integration problems, we would end up with frustration. For this reason, the importance of the gamma function cannot be denied.
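
    As a small illustration of the kind of integral the gamma function resolves directly (a standard identity, not taken from the thesis):

    ```latex
    % Definition of the gamma function and its factorial property:
    \Gamma(z) = \int_0^{\infty} x^{\,z-1} e^{-x}\,dx, \qquad \Gamma(n) = (n-1)! \ \text{for integer } n \ge 1.
    % Example: an integral that is tedious by repeated integration by parts
    % but immediate via the substitution u = 2x and the gamma function:
    \int_0^{\infty} x^{3} e^{-2x}\,dx
      = \frac{1}{2^{4}} \int_0^{\infty} u^{3} e^{-u}\,du
      = \frac{\Gamma(4)}{16}
      = \frac{3!}{16}
      = \frac{3}{8}.
    ```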