
    Adaptive Cooperative Learning Methodology for Oil Spillage Pattern Clustering and Prediction

    The serious environmental, economic and social consequences of oil spillages could devastate any nation of the world. Notable consequences include loss of (or serious threat to) lives, huge financial losses, and colossal damage to the ecosystem. Hence, understanding the spillage pattern and making precise predictions in real time (as opposed to the existing rough and discrete predictions) is required to give decision makers a more realistic picture of the environment. This paper seeks to address this problem by exploiting oil spillage features with sets of collected data of oil spillage scenarios. The proposed system integrates three state-of-the-art tools: self-organizing maps (SOM), ensembles of deep neural networks (k-DNN) and an adaptive neuro-fuzzy inference system (ANFIS). It begins with unsupervised learning using SOM, where four natural clusters were discovered and used to make the data suitable for classification and prediction (supervised learning) by ensembles of k-DNN and ANFIS. Results obtained showed significant classification and prediction improvements, largely attributed to the hybrid learning approach, ensemble learning and cognitive reasoning capabilities. However, optimization of the k-DNN structure and weights would be needed for speed enhancement. The system would provide a means of understanding the nature, type and severity of oil spillages, thereby facilitating a rapid response to impending oil spillages.
    Keywords: SOM, ANFIS, Fuzzy Logic, Neural Network, Oil Spillage, Ensemble Learning
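
    The abstract above pairs unsupervised SOM clustering with a supervised prediction stage. The sketch below illustrates that two-stage idea only; it is not the authors' system. MiniSom stands in for the SOM, a small ensemble of scikit-learn MLPs stands in for the k-DNN/ANFIS stage, and the synthetic data and every parameter value are assumptions.

```python
# Two-stage sketch: SOM clustering, then supervised learning on the clusters.
import numpy as np
from minisom import MiniSom
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # hypothetical spill features

# Stage 1: unsupervised clustering with a 2x2 SOM (four natural clusters).
som = MiniSom(2, 2, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
winners = [som.winner(x) for x in X]
labels = np.array([2 * i + j for i, j in winners])

# Stage 2: supervised learning on the discovered clusters with a small
# ensemble of neural networks combined by majority vote.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
ensemble = [MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=seed).fit(X_tr, y_tr)
            for seed in range(3)]
votes = np.stack([m.predict(X_te) for m in ensemble])
pred = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (pred == y_te).mean())
```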

    Self learning neuro-fuzzy modeling using hybrid genetic probabilistic approach for engine air/fuel ratio prediction

    Machine learning is concerned with constructing models that can learn and make predictions based on data. Automatic rule extraction from real-world data, which is usually tainted with noise, ambiguity, and uncertainty, requires feature selection. The neuro-fuzzy system (NFS), known for its prediction performance, has difficulty determining the proper number of rules and the number of membership functions for each rule. An enhanced hybrid Genetic Algorithm based Fuzzy Bayesian classifier (GA-FBC) was proposed to help the NFS in rule extraction. Feature selection was performed at the rule level, overcoming the problem of the FBC, which depends on the frequency of features and thus tends to ignore the patterns of small classes. Since a real-world problem such as air/fuel ratio (AFR) prediction is addressed, a multi-objective formulation is adopted. The GA-FBC uses mutual information entropy, which considers the relevance between feature attributes and class attributes. A fitness function is proposed to deal with the multi-objective problem without weights, using a new composition method. The model was compared to other learning algorithms for the NFS such as fuzzy c-means (FCM) and the grid partition algorithm. Predictive accuracy and the complexity of the Fuzzy Rule Base System (FRBS), including the number of rules and the number of terms in each rule, were taken as evaluation criteria. It was also compared to the original GA-FBC, which depends on frequency rather than on Mutual Information (MI). Experimental results using air/fuel ratio (AFR) data sets show that the new model decreases the average number of attributes per rule and sometimes increases the average performance compared to other models. This work facilitates achieving a self-generating FRBS from real data. The GA-FBC can be used as a new direction in machine learning research. This research contributes to controlling automobile emissions, helping to reduce one of the major causes of pollution and produce a greener environment.
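
    As a rough illustration of the kind of search described above (not the authors' GA-FBC), the sketch below runs a small genetic algorithm over binary feature masks with a fitness that composes cross-validated accuracy, mutual-information relevance, and a complexity penalty; the classifier, weights, and data are all illustrative assumptions.

```python
# GA-style feature selection guided by mutual information and a complexity penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           random_state=1)
mi = mutual_info_classif(X, y, random_state=1)   # relevance of each feature

def fitness(mask):
    if not mask.any():
        return -1.0
    acc = cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()
    # Composition of objectives: accuracy + relevance - complexity penalty.
    return acc + 0.1 * mi[mask].mean() - 0.02 * mask.sum()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for _ in range(30):                              # simple generational GA
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    pa = parents[rng.integers(0, 10, 10)]
    pb = parents[rng.integers(0, 10, 10)]
    cross = rng.random(pa.shape) < 0.5           # uniform crossover
    children = np.where(cross, pa, pb)
    flip = rng.random(children.shape) < 0.1      # bit-flip mutation
    children = np.where(flip, ~children, children)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```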

    Input significance analysis: feature selection through synaptic weights manipulation for EFuNNs classifier

    This work is interested in ISA methods that can manipulate synaptic weights, namely Connection Weights (CW) and Garson's Algorithm (GA), and the classifier selected is Evolving Fuzzy Neural Networks (EFuNNs). Firstly, the FS method is tested on a dataset selected from the UCI Machine Learning Repository and executed in an online environment; the results are recorded and compared with the results that used original and ranked data from the previous work. This is to identify whether FS can contribute to improved results and which of the ISA methods mentioned above work well with FS, i.e. give the best results. Secondly, the FS results are attested by using a differently selected dataset taken from the same source and in the same environment. The results are promising when FS is applied: noticeable gains in efficiency and accuracy are observed compared to the original and ranked data.
    Keywords: feature selection; feature ranking; input significance analysis; evolving connectionist systems; evolving fuzzy neural network; connection weights; Garson's algorithm
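
    Garson's Algorithm, one of the two ISA methods named above, can be sketched directly from the synaptic weights of a trained network. The sketch below applies it to a scikit-learn MLP on synthetic data rather than to an EFuNN, so it illustrates only the weight-based significance computation, not the paper's setup.

```python
# Garson's algorithm: input significance from the weights of a trained
# single-hidden-layer network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=2)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                    random_state=2).fit(X, y)

W_ih = np.abs(net.coefs_[0])    # input-to-hidden weights, shape (n_in, n_hid)
W_ho = np.abs(net.coefs_[1])    # hidden-to-output weights, shape (n_hid, n_out)

# Share of each input within each hidden neuron, weighted by that neuron's
# contribution to the output, then normalized to sum to one.
contrib = (W_ih / W_ih.sum(axis=0)) * W_ho.sum(axis=1)
importance = contrib.sum(axis=1)
importance /= importance.sum()

for i in np.argsort(importance)[::-1]:
    print(f"feature {i}: significance {importance[i]:.3f}")
```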

    Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to?

    Evolutionary fuzzy systems are one of the greatest advances within the area of computational intelligence. They consist of evolutionary algorithms applied to the design of fuzzy systems. Thanks to this hybridization, superb abilities are provided to fuzzy modeling in many different data science scenarios. This contribution is a position paper developing a comprehensive analysis of the evolutionary fuzzy systems research field. To this end, the "4 W" questions are posed and addressed with the aim of understanding the current context of this topic and its significance. Specifically, it will be pointed out why evolutionary fuzzy systems are important from an explainable point of view, when they began, what they are used for, and where the attention of researchers should be directed in the near future in this area. They must play an important role in the emerging area of eXplainable Artificial Intelligence (XAI) learning from data.

    An Optimized Type-2 Self-Organizing Fuzzy Logic Controller Applied in Anesthesia for Propofol Dosing to Regulate BIS

    During general anesthesia, anesthesiologists, who provide the anesthetic dosage, traditionally play a fundamental role in regulating the Bispectral Index (BIS). In this paper, however, an optimized type-2 Self-Organizing Fuzzy Logic Controller (SOFLC) is designed for a Target Controlled Infusion (TCI) pump for propofol dosing guided by BIS, to realize automatic control of general anesthesia. The type-2 SOFLC combines a type-2 fuzzy logic controller with a self-organizing (SO) mechanism to facilitate online training while being able to contend with operational uncertainties. A novel data-driven Surrogate Model (SM) and Genetic Programming (GP) based strategy is introduced for optimizing the type-2 SOFLC parameters offline to handle inter-patient variability. A pharmacological model is built for simulation, in which different optimization strategies are tested and compared. Simulation results are presented to demonstrate the applicability of our approach and show that the proposed optimization strategy can achieve better control performance in terms of steady-state error and robustness.
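
    The offline, surrogate-assisted tuning idea can be sketched as follows. Note that the paper builds its surrogate with genetic programming and tunes a type-2 SOFLC against a pharmacological model; in this sketch a Gaussian-process regressor and a stubbed one-line simulator stand in, so the parameterization and cost function are purely illustrative.

```python
# Offline surrogate-assisted tuning of controller parameters (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)

def simulate_cost(params):
    """Stub for an expensive pharmacological simulation: returns a control
    cost (e.g. integrated BIS setpoint error) for given controller parameters."""
    gain, width = params
    return (gain - 1.2) ** 2 + (width - 0.4) ** 2 + 0.01 * rng.normal()

# 1) Evaluate a small initial design on the expensive simulator.
design = rng.uniform([0.0, 0.1], [3.0, 1.0], size=(15, 2))
costs = np.array([simulate_cost(p) for p in design])

# 2) Fit the surrogate on the evaluated points.
surrogate = GaussianProcessRegressor(normalize_y=True).fit(design, costs)

# 3) Screen a large pool of candidates cheaply with the surrogate and only
#    re-simulate the most promising one.
pool = rng.uniform([0.0, 0.1], [3.0, 1.0], size=(2000, 2))
best = pool[np.argmin(surrogate.predict(pool))]
print("candidate parameters:", best, "simulated cost:", simulate_cost(best))
```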

    Fair comparison of skin detection approaches on publicly available datasets

    Skin detection is the process of discriminating skin and non-skin regions in a digital image, and it is widely used in several applications ranging from hand gesture analysis to body-part tracking and face detection. Skin detection is a challenging problem which has drawn extensive attention from the research community; nevertheless, a fair comparison among approaches is very difficult due to the lack of a common benchmark and a unified testing protocol. In this work, we investigate the most recent research in this field and propose a fair comparison among approaches using several different datasets. The major contributions of this work are an exhaustive literature review of skin color detection approaches; a framework to evaluate and combine different skin detector approaches, whose source code is made freely available for future research; and an extensive experimental comparison among several recent methods, which have also been used to define an ensemble that works well in many different problems. Experiments are carried out on 10 different datasets including more than 10,000 labelled images: experimental results confirm that the best method proposed here obtains very good performance with respect to other stand-alone approaches, without requiring ad hoc parameter tuning. A MATLAB version of the framework for testing and of the methods proposed in this paper will be freely available from https://github.com/LorisNann
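
    In the spirit of the combination framework described above (which is MATLAB-based and far more complete), the sketch below fuses three classical rule-based skin detectors by per-pixel majority vote using OpenCV; the color-space thresholds are common textbook values, not the paper's tuned ensemble, and the input image name is hypothetical.

```python
# Majority-vote ensemble of simple rule-based skin detectors.
import cv2
import numpy as np

def skin_ycrcb(img_bgr):
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)) > 0

def skin_hsv(img_bgr):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 40, 60), (25, 255, 255)) > 0

def skin_rgb_rule(img_bgr):
    b, g, r = [c.astype(int) for c in cv2.split(img_bgr)]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)

def skin_ensemble(img_bgr):
    masks = np.stack([skin_ycrcb(img_bgr), skin_hsv(img_bgr), skin_rgb_rule(img_bgr)])
    return masks.sum(axis=0) >= 2          # skin if at least 2 detectors agree

if __name__ == "__main__":
    img = cv2.imread("example.jpg")        # hypothetical test image
    mask = skin_ensemble(img)
    cv2.imwrite("skin_mask.png", (mask * 255).astype(np.uint8))
```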

    Brain image clustering by wavelet energy and CBSSO optimization algorithm

    The diagnosis of brain abnormality is significantly important for saving social and hospital resources. Wavelet energy is known as an effective feature descriptor with great efficiency in different applications. This paper suggests a new method based on wavelet energy to automatically classify magnetic resonance imaging (MRI) brain images into two groups (normal and abnormal), utilizing support vector machine (SVM) classification based on chaotic binary shark smell optimization (CBSSO) to optimize the SVM weights. The results of the suggested CBSSO-based KSVM compare favorably to several other methods in terms of better sensitivity and authenticity. The proposed CAD system can additionally be utilized to categorize images with various pathological conditions, types, and illness modes.
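
    The wavelet-energy feature extraction followed by SVM classification can be sketched as below. The CBSSO weight optimization is not reproduced; PyWavelets computes the sub-band energies, scikit-learn provides the SVM, and the random stand-in "MRI slices" and labels are assumptions for illustration only.

```python
# Wavelet-energy features feeding an SVM classifier (CBSSO step omitted).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_energy(image, wavelet="db4", level=3):
    """Energy of each wavelet sub-band, used as the feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]                 # approximation energy
    for detail in coeffs[1:]:                        # (cH, cV, cD) per level
        feats.extend(np.sum(band ** 2) for band in detail)
    return np.log1p(feats)                           # compress dynamic range

rng = np.random.default_rng(4)
images = rng.normal(size=(60, 64, 64))               # stand-in MRI slices
labels = rng.integers(0, 2, size=60)                 # normal vs. abnormal

X = np.array([wavelet_energy(img) for img in images])
print("cross-val accuracy:",
      cross_val_score(SVC(kernel="rbf", C=1.0, gamma="scale"), X, labels, cv=5).mean())
```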

    Developing improved algorithms for detection and analysis of skin cancer

    University of Technology Sydney, Faculty of Engineering and Information Technology.
    Malignant melanoma is one of the deadliest forms of skin cancer, and the number of cases has increased rapidly in Europe, America, and Australia over the last few decades. Australia has one of the highest rates of skin cancer in the world, at nearly four times the rates in Canada, the US and the UK. Cancer treatment costs constitute more than 7.2% of health system costs. However, a recovery rate of around 95% can be achieved if melanoma is detected at an early stage. Early diagnosis is obviously dependent upon accurate assessment by a medical practitioner. The variations in diagnosis are sufficiently large, and there is a lack of detail in the test methods. This thesis investigates methods for automated analysis of skin images to develop improved algorithms and to extend the functionality of the existing methods used in various stages of the automated diagnostic system. In the long run this can provide an alternative basis for researchers to experiment with new and existing methodologies for skin cancer detection and diagnosis to help medical practitioners. The objective is to conduct a detailed investigation of the requirements of automated skin cancer diagnostic systems, and to improve and develop relevant segmentation, feature selection and classification methods to deal with the complex structures present in both dermoscopic/digital images and histopathological images. During the course of this thesis, several algorithms were developed. These algorithms were used in skin cancer diagnosis studies and some of them can also be applied in wider machine learning areas. The most important contributions of this thesis can be summarized as below:
    - Developing new segmentation algorithms designed specifically for skin cancer images, including digital images of lesions and histopathological images, with attention to their respective properties. The proposed algorithm uses a two-stage approach. Initially, coarse segmentation of the lesion area is done with a histogram-analysis-based, orientation-sensitive fuzzy C-means clustering algorithm. The result of stage 1 is used for the initialization of a level-set-based algorithm developed for detecting finer differentiating details. The proposed algorithms achieved a true detection rate of around 93% for external skin lesion images and around 88% for histopathological images.
    - Developing an adaptive differential evolution based feature selection and parameter optimization algorithm. The proposed method aims at an efficient approach that provides good accuracy for skin cancer detection while taking care of the number of features and the parameter tuning of the feature selection and classification algorithms, as they all play an important role in the overall analysis phase. The proposed method was also tested on 10 standard datasets for different kinds of cancers, and the results show improved performance on all the datasets compared to various state-of-the-art methods.
    - Proposing a parallelized knowledge based learning model which can make better use of the differentiating features while increasing the generalization capability of the classification phase using an advised support vector machine. Two classification algorithms were also developed for skin cancer data analysis, which can make use of both labelled and unlabelled data for training. The first is based on a semi-advised support vector machine, while the second is based on a deep learning approach. A method of integrating the results of these two approaches is also proposed.
    The experimental analysis showed very promising results for the appropriate diagnosis of melanoma. The classification accuracy achieved with the help of the proposed algorithms was around 95% for external skin lesion classification and around 92% for histopathological image analysis. The skin cancer dataset used in this thesis was obtained mainly from the Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, while for comparative analysis and benchmarking of a few algorithms some standard online cancer datasets were also used. The obtained results show good performance in segmentation and classification and can form the basis of more advanced computer-aided diagnostic systems. In the future, the developed algorithms can also be extended to other kinds of image analysis applications.
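
    One ingredient of the thesis, differential-evolution-based joint feature selection and classifier parameter tuning, can be sketched as below. This is not the thesis implementation: SciPy's differential_evolution, the synthetic data, the SVM stand-in classifier, and the gene encoding are all illustrative assumptions.

```python
# Differential evolution over feature mask + SVM hyper-parameters.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=5)

def objective(z):
    # First 10 genes are feature weights (>0.5 keeps a feature),
    # last two are log10(C) and log10(gamma) of the RBF SVM.
    mask = z[:10] > 0.5
    if not mask.any():
        return 1.0
    C, gamma = 10.0 ** z[10], 10.0 ** z[11]
    acc = cross_val_score(SVC(C=C, gamma=gamma), X[:, mask], y, cv=3).mean()
    return (1.0 - acc) + 0.01 * mask.sum()   # error plus feature-count penalty

bounds = [(0.0, 1.0)] * 10 + [(-2.0, 3.0), (-4.0, 1.0)]
result = differential_evolution(objective, bounds, maxiter=10, seed=5, tol=1e-6)
print("selected features:", np.flatnonzero(result.x[:10] > 0.5))
print("best C, gamma:", 10.0 ** result.x[10], 10.0 ** result.x[11])
```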

    LSTM Networks for Detection and Classification of Anomalies in Raw Sensor Data

    In order to ensure the validity of sensor data, it must be thoroughly analyzed for various types of anomalies. Traditional machine learning methods of anomaly detection in sensor data are based on domain-specific feature engineering. A typical approach is to use domain knowledge to analyze sensor data and manually create statistics-based features, which are then used to train the machine learning models to detect and classify the anomalies. Although this methodology is used in practice, it has a significant drawback: feature extraction is usually labor intensive and requires considerable effort from domain experts. An alternative approach is to use deep learning algorithms. Research has shown that modern deep neural networks are very effective in automated extraction of abstract features from raw data in classification tasks. Long short-term memory networks, or LSTMs for short, are a special kind of recurrent neural network capable of learning long-term dependencies. These networks have proved to be especially effective in the classification of raw time-series data in various domains. This dissertation systematically investigates the effectiveness of the LSTM model for anomaly detection and classification in raw time-series sensor data. As a proof of concept, this work used time-series data of sensors that measure blood glucose levels. A large number of time-series sequences were created based on a genuine medical diabetes dataset. Anomalous series were constructed by six methods that interspersed patterns of common anomaly types in the data. An LSTM network model was trained with k-fold cross-validation on both anomalous and valid series to classify raw time-series sequences into one of seven classes: non-anomalous, and classes corresponding to each of the six anomaly types. As a control, the accuracy of detection and classification of the LSTM was compared to that of four traditional machine learning classifiers: support vector machines, random forests, naive Bayes, and shallow neural networks. The performance of all the classifiers was evaluated based on nine metrics: precision, recall, and the F1-score, each measured from the micro, macro and weighted perspectives. While the traditional models were trained on vectors of features, derived from the raw data, that were based on knowledge of common sources of anomaly, the LSTM was trained on raw time-series data. Experimental results indicate that the performance of the LSTM was comparable to the best traditional classifiers, achieving 99% accuracy in all nine metrics. The model requires no labor-intensive feature engineering, and the fine-tuning of its architecture and hyper-parameters can be done in a fully automated way. This study, therefore, finds LSTM networks an effective solution to anomaly detection and classification in sensor data.
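
    A minimal sketch of the kind of LSTM classifier described above, with seven output classes (valid plus six anomaly types), might look as follows in Keras; the sequence length, layer sizes, and synthetic data are assumptions, not the dissertation's configuration.

```python
# LSTM classifier over raw time-series windows with seven classes.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 7                 # non-anomalous + six anomaly types
SEQ_LEN, N_FEATURES = 48, 1     # e.g. 48 consecutive glucose readings

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, SEQ_LEN, N_FEATURES)).astype("float32")
y = rng.integers(0, NUM_CLASSES, size=1000)        # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),                      # learns long-term structure
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=2)
```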