
    A novel framework for predicting patients at risk of readmission

    Uncertainty in decision-making about patients' risk of re-admission arises from non-uniform data and gaps in knowledge of health-system variables. Knowledge of the impact of risk factors will support clinicians in making better decisions and in reducing the number of patients admitted to hospital. Traditional approaches cannot account for the uncertain nature of the risk of hospital re-admission, and the problem is compounded by the large amount of uncertain information involved. Patients can be at high, medium or low risk of re-admission, and these strata have ill-defined boundaries. We believe that our model, which adapts the fuzzy regression method, opens a novel approach to handling uncertain data and uncertain relationships between health-system variables and the risk of re-admission. Because the risk bands have ill-defined boundaries, this approach allows clinicians to target individuals at those boundaries; targeting such individuals and providing them with proper care may make it possible to move patients from the high-risk to the low-risk band. In developing this algorithm, we aimed to help potential users assess patients against various risk-score thresholds and to avoid re-admission of high-risk patients through proper interventions. A model for predicting patients at high risk of re-admission will enable interventions to be targeted before costs are incurred and health status deteriorates. A risk-score cut-off level would flag patients and could result in net savings even where per-patient intervention costs are high. Preventing hospital re-admissions is important for patients, and our algorithm may also affect hospital income.
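    The abstract does not give the model's actual equations, so the following is only a minimal sketch of the one concrete idea stated above: risk bands with ill-defined boundaries. It assumes trapezoidal membership functions over a normalised risk score in [0, 1]; the band shapes and cut-offs are illustrative assumptions, not the paper's parameters.

```python
# Sketch of overlapping readmission-risk bands with fuzzy boundaries.
# Band shapes and cut-offs are illustrative assumptions, not from the paper.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def risk_band_memberships(score):
    """Map a readmission risk score in [0, 1] to soft band memberships."""
    return {
        "low":    trapezoid(score, -0.1, 0.0, 0.25, 0.45),
        "medium": trapezoid(score, 0.25, 0.45, 0.55, 0.75),
        "high":   trapezoid(score, 0.55, 0.75, 1.0, 1.1),
    }

# A patient near a band boundary belongs partially to two bands, which
# is what makes borderline individuals visible for targeted intervention.
print({k: round(v, 2) for k, v in risk_band_memberships(0.6).items()})
# -> {'low': 0.0, 'medium': 0.75, 'high': 0.25}
```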

    Induction of accurate and interpretable fuzzy rules from preliminary crisp representation

    This paper proposes a novel approach for building transparent knowledge-based systems by generating accurate and interpretable fuzzy rules. The learning mechanism reported here induces fuzzy rules using only predefined fuzzy labels that reflect prescribed notations and domain expertise, thereby ensuring transparency in the knowledge model adopted for problem solving. It works by mapping every coarsely learned crisp production rule in the knowledge base onto a set of potentially useful fuzzy rules, which serves as an initial step towards an intuitive technique for similarity-based rule generalisation. This is followed by a procedure that locally selects a compact subset of the emerging fuzzy rules, so that the resulting subset collectively generalises the underlying original crisp rule. The outcome of this local procedure forms the input to a global genetic search process, which seeks a trade-off between the accuracy and complexity of the eventually induced fuzzy rule base while maintaining transparency. Systematic experimental results are provided to demonstrate that the induced fuzzy knowledge base offers high performance and interpretability.
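    As a rough illustration of the mapping step described above, the sketch below takes one crisp rule condition and selects the predefined fuzzy labels that sufficiently overlap it, yielding the candidate fuzzy rules. The triangular labels, the sampling-based overlap test and the threshold are illustrative assumptions, not the paper's induction mechanism.

```python
# Hedged sketch: map a crisp interval condition onto candidate fuzzy
# labels by overlap. Labels, overlap test and threshold are assumptions.

def triangular(x, a, b, c):
    """Triangular membership peaking at b over the support [a, c]."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical predefined fuzzy labels for one normalised attribute.
LABELS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.0),
}

def candidate_labels(lo, hi, n_samples=50, threshold=0.5):
    """Return labels whose membership exceeds `threshold` somewhere in
    the crisp condition interval [lo, hi]."""
    xs = [lo + i * (hi - lo) / (n_samples - 1) for i in range(n_samples)]
    return [name for name, (a, b, c) in LABELS.items()
            if max(triangular(x, a, b, c) for x in xs) >= threshold]

# Crisp condition "0.4 <= attribute <= 0.8" maps onto two fuzzy labels:
print(candidate_labels(0.4, 0.8))  # -> ['medium', 'high']
```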

    Data-driven fuzzy rule generation and its application for student academic performance evaluation

    Several approaches using fuzzy techniques have been proposed to provide a practical method for evaluating student academic performance. However, these approaches are largely based on expert opinion and make it difficult to explore and utilize the valuable information embedded in collected data. This paper proposes a new method for evaluating student academic performance based on data-driven fuzzy rule induction; a suitable fuzzy inference mechanism and an associated Rule Induction Algorithm are given. The new method has been applied to perform Criterion-Referenced Evaluation (CRE), and comparisons with typical existing methods reveal significant advantages of the present work. The new method has also been applied to perform Norm-Referenced Evaluation (NRE), demonstrating its potential as an extended method of evaluation that can produce new and informative scores from information gathered in the data.
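    A minimal sketch of the kind of fuzzy inference such a rule base might use for scoring, assuming min-based rule firing and weighted-average defuzzification; the rules, criterion names and membership degrees below are invented for illustration only.

```python
# Illustrative fuzzy inference for a student score: each rule fires to a
# degree (min over its antecedent labels) and votes for a consequent
# score; the final mark is the firing-strength-weighted average.
# Rules and memberships are hypothetical, not from the paper.

def weighted_average_inference(rules, memberships):
    """rules: list of (antecedent_labels, consequent_score).
    memberships: dict mapping a fuzzy label to its degree for a student."""
    num = den = 0.0
    for labels, score in rules:
        strength = min(memberships.get(l, 0.0) for l in labels)
        num += strength * score
        den += strength
    return num / den if den else 0.0

rules = [(("accuracy_high", "time_low"),  95.0),
         (("accuracy_high", "time_high"), 80.0),
         (("accuracy_low",  "time_low"),  60.0)]
student = {"accuracy_high": 0.7, "accuracy_low": 0.3,
           "time_low": 0.6, "time_high": 0.4}
print(round(weighted_average_inference(rules, student), 1))  # -> 82.3
```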

    Evolving granular systems

    Advisor: Fernando Antonio Campos Gomide. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Doctorate in Electrical Engineering (Automation).
    Abstract: In recent years there has been increasing interest in computational modeling approaches for dealing with real-world data streams. Methods and algorithms have been proposed to uncover meaningful knowledge from very large (often unbounded) data sets that in principle have no apparent value. This thesis introduces a framework for evolving granular modeling of uncertain data streams. Evolving granular systems comprise an array of online modeling approaches inspired by the way in which humans deal with complexity. These systems explore the information flow in dynamic environments and derive from it models that can be linguistically understood. In particular, information granulation is a natural technique for dispensing with unnecessary details and emphasizing the transparency, interpretability and scalability of information systems. Uncertain (granular) data arise from imprecise perception or description of the value of a variable. Broadly stated, various factors can affect one's choice of data representation such that the representing object conveys the meaning of the concept it is being used to represent. Of particular concern to this work are numerical, interval, and fuzzy types of granular data, and interval, fuzzy, and neurofuzzy modeling frameworks. Learning in evolving granular systems is based on incremental algorithms that build model structure from scratch on a per-sample basis and adapt model parameters whenever necessary. This learning paradigm is particularly important because it avoids redesigning and retraining models whenever the environment changes. Application examples in classification, function approximation, time-series prediction and control using real and synthetic data illustrate the usefulness of the proposed granular approaches and framework. The behavior of nonstationary data streams with gradual and abrupt regime shifts is also analyzed within the paradigm of evolving granular computing. We shed light on the role of interval, fuzzy, and neurofuzzy computing in processing uncertain data and providing high-quality approximate solutions and rule summaries of input-output data sets. The approaches and framework introduced constitute a natural extension of evolving intelligent systems over numeric data streams to evolving granular systems over granular data streams.
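    A minimal sketch of the incremental learning paradigm described above, assuming the simplest possible granules: intervals that are created when no existing granule covers a new sample and expanded otherwise. The coverage rule and the granule width are illustrative assumptions, not the thesis's actual algorithms.

```python
# Sketch of evolving granular learning on a data stream: structure is
# built from scratch, one sample at a time, with no retraining.
# The interval granule and expansion rule are illustrative assumptions.

class IntervalGranule:
    def __init__(self, x):
        self.lo = self.hi = x

    def covers(self, x, rho):
        return self.lo - rho <= x <= self.hi + rho

    def adapt(self, x):
        self.lo, self.hi = min(self.lo, x), max(self.hi, x)

class EvolvingGranularModel:
    def __init__(self, rho=0.15):
        self.rho = rho          # maximum allowed expansion per sample
        self.granules = []

    def learn(self, x):
        for g in self.granules:
            if g.covers(x, self.rho):
                g.adapt(x)      # parameter adaptation only
                return g
        g = IntervalGranule(x)  # structure grows when nothing covers x
        self.granules.append(g)
        return g

model = EvolvingGranularModel()
for x in [0.10, 0.18, 0.90, 0.22, 0.85, 0.50]:
    model.learn(x)
print([(g.lo, g.hi) for g in model.granules])
# -> [(0.1, 0.22), (0.85, 0.9), (0.5, 0.5)]
```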

    From approximative to descriptive fuzzy models


    The legacy of 50 years of fuzzy sets: A discussion

    This note provides a brief overview of the main ideas and notions underlying fifty years of research in fuzzy set and possibility theory, two important settings introduced by L.A. Zadeh for representing sets with unsharp boundaries and uncertainty induced by granules of information expressed with words. The discussion is organized around three potential understandings of the grades of membership in a fuzzy set, depending on what the fuzzy set is intended to represent: a group of elements with borderline members, a plausibility distribution, or a preference profile. The note also questions the motivations for some existing generalized fuzzy sets, and it clearly reflects the shared personal views of its authors.

    Analyzing the impact of changing software requirements: a traceability-based methodology

    Software undergoes change at all stages of the software development process. Changing requirements represent risks to the success and completion of a project, so it is critical for project management to determine the impact of requirement changes in order to control the change process. We present a requirements-traceability-based impact analysis methodology for predictively evaluating requirement changes in software development projects. The Trace-based Impact Analysis Methodology (TIAM) uses trace information, together with attributes of the work products and traces, to define a requirement change impact metric that determines the severity of a requirement change. We define the Work product Requirements trace Model (WoRM) to represent the information required by the methodology; WoRM consists of the Work product Information Model (WIM) for the software product and the Requirement change Information Model (RIM) for requirement changes. TIAM produces a set of classes of requirement changes ordered from low to high impact, with requirement changes placed into classes according to their similarity. The similarity between requirement changes is based on a fuzzy compatibility relation between their respective requirement change impact metrics. TIAM also identifies potentially impacted work products by generating a set of them for each requirement change. The experimental results show a favorable comparison between classes of requirement changes based on actual impact and classes based on predicted impact.
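    As a hedged illustration of the fuzzy-compatibility grouping, the sketch below scores the similarity of two impact-metric vectors as one minus their normalised distance and forms classes with an alpha-cut; the metric vectors, the distance measure and the threshold are assumptions, not TIAM's actual definitions.

```python
# Sketch: group requirement changes whose impact metrics are mutually
# compatible at level alpha. Vectors and alpha are hypothetical.

def compatibility(u, v):
    """Fuzzy compatibility of two impact-metric vectors in [0, 1]^n."""
    return 1.0 - sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def alpha_cut_classes(changes, alpha=0.8):
    """Greedily place each change into the first class whose members are
    all compatible with it at level alpha; otherwise open a new class."""
    classes = []
    for name, vec in changes:
        for cls in classes:
            if all(compatibility(vec, v) >= alpha for _, v in cls):
                cls.append((name, vec))
                break
        else:
            classes.append([(name, vec)])
    return classes

changes = [("CR1", [0.9, 0.8]), ("CR2", [0.85, 0.9]),
           ("CR3", [0.2, 0.1]), ("CR4", [0.15, 0.2])]
print([[name for name, _ in cls] for cls in alpha_cut_classes(changes)])
# -> [['CR1', 'CR2'], ['CR3', 'CR4']]  (high- vs. low-impact classes)
```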

    Multistep Fuzzy Bridged Refinement Domain Adaptation Algorithm and Its Application to Bank Failure Prediction

    Machine learning plays an important role in data classification and data-based prediction. In some real-world applications, however, the training data (from the source domain) and test data (from the target domain) come from different domains or time periods, which may result in different distributions of some features. Moreover, the values of the features and/or labels might be non-numeric and involve vague values. Traditional learning-based prediction and classification methods cannot handle these two issues. In this study, we propose a multistep fuzzy bridged refinement domain adaptation algorithm that offers an effective way to deal with both. It utilizes a concept of similarity to modify the labels of target instances that were initially predicted by a shift-unaware model, then refines the labels using the instances most similar to a given target instance; these instances are extracted from mixture domains composed of the source and target domains. The proposed algorithm operates purely on the data when refining the labels, and thus performs completely independently of the shift-unaware prediction model. The algorithm uses a fuzzy set-based approach to deal with the vague values of features and labels. Four different datasets are used in the experiments to validate the proposed algorithm. The results, compared with those generated by existing domain adaptation methods, demonstrate a significant improvement in prediction accuracy on all four datasets.
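    A minimal sketch of the refinement idea under stated assumptions: labels initially predicted by a shift-unaware model are revised over several steps using the most similar instances drawn from a mixture of the source and target domains. Euclidean distance with majority voting stands in for the paper's fuzzy similarity measure, and all names and data below are hypothetical.

```python
# Sketch of multistep label refinement for domain adaptation. Euclidean
# k-NN voting is a stand-in for the paper's fuzzy similarity concept.
import math

def refine_labels(target_x, initial_labels, source, steps=3, k=3):
    """source: list of (features, label) pairs; initial_labels: labels
    for target_x from a shift-unaware model. Returns refined labels."""
    labels = list(initial_labels)
    for _ in range(steps):
        # Mixture domain: source instances plus currently labelled targets.
        mixed = source + list(zip(target_x, labels))
        new_labels = []
        for x in target_x:
            nearest = sorted(mixed, key=lambda p: math.dist(x, p[0]))[:k]
            votes = {}
            for _, lab in nearest:
                votes[lab] = votes.get(lab, 0) + 1
            new_labels.append(max(votes, key=votes.get))
        labels = new_labels
    return labels

# Hypothetical bank-failure data: two features per bank.
source = [([0.0, 0.0], "fail"), ([0.1, -0.1], "fail"),
          ([1.0, 1.0], "healthy"), ([0.9, 1.1], "healthy")]
target = [[0.2, 0.1], [0.8, 0.9]]
# Deliberately wrong initial labels get corrected by the refinement:
print(refine_labels(target, ["healthy", "fail"], source))
# -> ['fail', 'healthy']
```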