3,191 research outputs found

    Fuzzy relational classifier trained by fuzzy clustering

    A survey on utilization of data mining approaches for dermatological (skin) diseases prediction

    Due to recent technological advances, large volumes of medical data are being collected. These data contain valuable information, so data mining techniques can be used to extract useful patterns. This paper introduces data mining and its various techniques and surveys the available literature on medical data mining, with an emphasis on the application of data mining to skin diseases. A categorization is provided based on the different data mining techniques, and the utility of the various methodologies is highlighted. Association rule mining is generally well suited to extracting rules and has been used especially in cancer diagnosis. Classification is a robust method in medical mining; we summarize its different uses in dermatology, where it is one of the most important methods for diagnosing erythemato-squamous diseases, with approaches including neural networks, genetic algorithms, and fuzzy classification. Clustering is useful in medical image mining; its purpose is to find structure in the data by identifying similarities between data items according to their characteristics, and it has several applications in dermatology. Besides introducing the different mining methods, we investigate some of the challenges that arise in mining skin data.
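
    As an illustration of the classification approach the survey highlights for erythemato-squamous disease diagnosis, the sketch below uses scikit-learn with a small neural-network classifier. The synthetic feature matrix, class count, and network size are placeholder assumptions for illustration, not data or settings from the surveyed work.

```python
# Minimal, hypothetical sketch of a classification pipeline of the kind the
# survey reviews for erythemato-squamous disease diagnosis. The synthetic
# data stands in for clinical/histopathological attributes; it is not any
# dataset discussed in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 360, 34, 6   # sizes chosen arbitrarily
X = rng.normal(size=(n_samples, n_features))    # placeholder attributes
y = rng.integers(0, n_classes, size=n_samples)  # placeholder diagnoses

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A small neural-network classifier, one of the method families the survey cites.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

    With random placeholder labels the report will show chance-level accuracy; the point is only the shape of the pipeline, not a result.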

    Advanced Signal Processing and Control in Anaesthesia

    This thesis comprises three major stages: classification of depth of anaesthesia (DOA); modelling a typical patient’s behaviour during a surgical procedure; and control of DOA with simultaneous administration of propofol and remifentanil. Clinical data gathered in the operating theatre was used in this project. Multiresolution wavelet analysis was used to extract meaningful features from the auditory evoked potentials (AEP). These features were classified into different DOA levels using a fuzzy relational classifier (FRC). The FRC uses fuzzy clustering and fuzzy relational composition. The FRC performed well and was able to distinguish between the DOA levels. A hybrid patient model was developed for the induction and maintenance phase of anaesthesia. An adaptive network-based fuzzy inference system was used to adapt Takagi-Sugeno-Kang (TSK) fuzzy models relating systolic arterial pressure (SAP), heart rate (HR), and the wavelet-extracted AEP features with the effect concentrations of propofol and remifentanil. The effect of surgical stimuli on SAP and HR, and the analgesic properties of remifentanil, were described by Mamdani fuzzy models constructed with anaesthetist cooperation. The model proved to be adequate, reflecting the effect of drugs and surgical stimuli. A multivariable fuzzy controller was developed for the simultaneous administration of propofol and remifentanil. The controller is based on linguistic rules that interact with three decision tables, one of which represents a fuzzy PI controller. The infusion rates of the two drugs are determined according to the DOA level and surgical stimulus. Remifentanil is titrated according to the required analgesia level and its synergistic interaction with propofol. The controller was able to adequately achieve and maintain the target DOA level under different conditions. Overall, it was possible to model the interaction between propofol and remifentanil, and to successfully use this model to develop a closed-loop system in anaesthesia.
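
    As a rough illustration of one building block mentioned above, the sketch below implements a generic fuzzy PI rule: the DOA error and its rate of change are fuzzified with triangular sets, a rule table selects output sets, and a weighted average defuzzifies the increment to the infusion rate. The membership parameters, rule table, and sign conventions are illustrative assumptions, not the controller reported in the thesis.

```python
# Minimal, hypothetical sketch of a fuzzy PI controller of the general kind
# described in the thesis. Membership parameters, the rule table, and the
# variable scaling here are assumptions for illustration only.
import numpy as np

LABELS = ["NB", "NS", "ZE", "PS", "PB"]           # negative big ... positive big
CENTRES = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # triangular set centres (assumed)

def fuzzify(x, width=0.5):
    """Triangular membership degrees of x in each linguistic set."""
    return np.clip(1.0 - np.abs(x - CENTRES) / width, 0.0, 1.0)

# Rule table: output label index for each (error, delta_error) label pair (assumed).
RULE_TABLE = np.array([
    [0, 0, 0, 1, 2],
    [0, 0, 1, 2, 3],
    [0, 1, 2, 3, 4],
    [1, 2, 3, 4, 4],
    [2, 3, 4, 4, 4],
])

def fuzzy_pi_step(error, d_error):
    """Return the defuzzified increment to the infusion rate (arbitrary units)."""
    mu_e, mu_de = fuzzify(error), fuzzify(d_error)
    firing = np.outer(mu_e, mu_de)                  # rule firing strengths (product t-norm)
    num = (firing * CENTRES[RULE_TABLE]).sum()      # weighted-average defuzzification
    den = firing.sum()
    return num / den if den > 0 else 0.0

# Example call (sign conventions assumed): a positive error with a positive
# trend yields a positive increment to the infusion rate.
print(fuzzy_pi_step(error=0.4, d_error=0.2))
```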

    DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System

    A key challenge in eXplainable Artificial Intelligence is the well-known tradeoff between the transparency of an algorithm (i.e., how easily a human can directly understand the algorithm, as opposed to receiving a post-hoc explanation) and its accuracy. We report on the design of a new deep network that achieves improved transparency without sacrificing accuracy. We design a deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy logic and deep learning models and show that DCNFIS performs as accurately as three existing convolutional neural networks on four well-known datasets. We furthermore show that DCNFIS outperforms state-of-the-art deep fuzzy systems. We then exploit the transparency of fuzzy logic by deriving explanations, in the form of saliency maps, from the fuzzy rules encoded in DCNFIS. We investigate the properties of these explanations in greater depth using the Fashion-MNIST dataset.
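
    The sketch below illustrates the general idea of a fuzzy rule layer acting as a classifier head over CNN features, the kind of hybridization the abstract describes. It is an ANFIS-style toy with assumed dimensions and random parameters, not the DCNFIS architecture, its training procedure, or its saliency-map derivation.

```python
# Minimal, hypothetical sketch of a fuzzy rule head over CNN features: each
# rule holds Gaussian memberships over the feature vector, rule firing
# strengths are normalised, and each rule votes for a class. Illustrative
# only; not the authors' DCNFIS model.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_rules, n_classes = 64, 10, 10   # sizes assumed for illustration

centres = rng.normal(size=(n_rules, n_features))        # rule centres
log_sigma = np.zeros((n_rules, n_features))             # log membership widths
rule_class_logits = rng.normal(size=(n_rules, n_classes))

def fuzzy_head(features):
    """Map a CNN feature vector to class probabilities via fuzzy rules."""
    sigma = np.exp(log_sigma)
    # Log firing strength of each rule: sum of log-Gaussian memberships.
    log_firing = -0.5 * (((features - centres) / sigma) ** 2).sum(axis=1)
    weights = np.exp(log_firing - log_firing.max())
    weights /= weights.sum()                            # normalised firing strengths
    logits = weights @ rule_class_logits                # rule-weighted class scores
    exp = np.exp(logits - logits.max())
    return exp / exp.sum(), weights

features = rng.normal(size=n_features)                  # stand-in for CNN output
probs, firing = fuzzy_head(features)
print("predicted class:", probs.argmax(), "dominant rule:", firing.argmax())
```

    Because each prediction is traceable to a small set of firing rules, explanations can in principle be read off the rule parameters, which is the transparency property the abstract exploits.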

    Survey of data mining approaches to user modeling for adaptive hypermedia

    The ability of an adaptive hypermedia system to create tailored environments depends mainly on the amount and accuracy of information stored in each user model. Some of the difficulties that user modeling faces are the amount of data available to create user models, the adequacy of the data, the noise within that data, and the necessity of capturing the imprecise nature of human behavior. Data mining and machine learning techniques have the ability to handle large amounts of data and to process uncertainty. These characteristics make them suitable for the automatic generation of user models that simulate human decision making. This paper surveys different data mining techniques that can be used to efficiently and accurately capture user behavior. The paper also presents guidelines that show which techniques may be used more efficiently according to the task implemented by the application.

    Ensemble learning method for hidden markov models.

    For complex classification systems, data are gathered from various sources and potentially have different representations, so they may exhibit large intra-class variations. In fact, modeling each data class with a single model might lead to poor generalization. The classification error can be more severe for temporal data, where each sample is represented by a sequence of observations. Thus, there is a need for building a classification system that takes into account the variations within each class in the data. This dissertation introduces an ensemble learning method for temporal data that uses a mixture of Hidden Markov Model (HMM) classifiers. We hypothesize that the data are generated by K models, each of which reflects a particular trend in the data. Model identification could be achieved through clustering in the feature space or in the parameter space; however, this approach is inappropriate in the context of sequential data. The proposed approach is based on clustering in the log-likelihood space and has two main steps. First, one HMM is fit to each of the N individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N-by-N log-likelihood distance matrix that is partitioned into K groups using a relational clustering algorithm. In the second step, we learn the parameters of one HMM per group. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum classification error (MCE) based discriminative, and the Variational Bayesian (VB) training approaches. Finally, to test a new sequence, its likelihood is computed under all the models and a final confidence value is assigned by combining the multiple models' outputs using a decision-level fusion method such as an artificial neural network or a hierarchical mixture of experts. Our approach was evaluated on two real-world applications: (1) identification of Cardio-Pulmonary Resuscitation (CPR) scenes in videos simulating medical crises; and (2) landmine detection using Ground Penetrating Radar (GPR). Results on both applications show that the proposed method can identify meaningful and coherent HMM mixture components that describe different properties of the data. Each HMM mixture component models a group of data that share common attributes. The results indicate that the proposed method outperforms the baseline HMM that uses one model for each class in the data.
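
    The two-step pipeline described above can be sketched with hmmlearn and SciPy as below. The toy sequences, the number of groups K, the state count, and the symmetrization of the log-likelihood matrix into a dissimilarity are illustrative assumptions, and only maximum-likelihood training of the per-group HMMs is shown (the dissertation also considers MCE and Variational Bayesian training, and a learned decision-level fusion at test time).

```python
# Minimal, hypothetical sketch of the dissertation's pipeline: fit one HMM per
# sequence, build a log-likelihood distance matrix, cluster it relationally,
# then fit one HMM per cluster. Dimensions and parameters are assumptions.
import numpy as np
from hmmlearn import hmm
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
K = 2                                                   # number of mixture components (assumed)
sequences = [rng.normal(size=(rng.integers(30, 60), 3)) for _ in range(12)]  # toy sequences

# Step 1: fit one HMM to each individual sequence.
models = []
for seq in sequences:
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20, random_state=0)
    m.fit(seq)
    models.append(m)

# N-by-N log-likelihood matrix L[i, j] = log p(sequence_j | model_i).
N = len(sequences)
L = np.array([[models[i].score(sequences[j]) for j in range(N)] for i in range(N)])

# Turn log-likelihoods into a symmetric dissimilarity and cluster it (relational clustering).
D = -0.5 * (L + L.T)
D -= D.min()
np.fill_diagonal(D, 0.0)
groups = fcluster(linkage(squareform(D, checks=False), method="average"),
                  K, criterion="maxclust")

# Step 2: fit one HMM per group on the concatenated member sequences (ML training shown).
mixture = []
for k in range(1, K + 1):
    members = [s for s, g in zip(sequences, groups) if g == k]
    X = np.concatenate(members)
    lengths = [len(s) for s in members]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20, random_state=0)
    m.fit(X, lengths)
    mixture.append(m)

# Testing: score a new sequence under every component; combining these scores
# is left to a downstream decision-level fusion method (e.g. a small neural network).
test_seq = rng.normal(size=(40, 3))
print([m.score(test_seq) for m in mixture])
```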