
    Fuzzy GMDH and its application to forecasting financial processes

    This paper investigates the fuzzy inductive modeling method known as the Group Method of Data Handling (GMDH) and its application to Data Mining problems, in particular to forecasting tasks in the financial sphere. The advantage of the inductive modeling method GMDH is the possibility of constructing an adequate model directly during the run of the algorithm. A generalization of GMDH to the case of uncertainty, a new method called fuzzy GMDH, is described, which enables fuzzy models to be constructed almost automatically. The algorithm of fuzzy GMDH is presented. Fuzzy GMDH with Gaussian and bell-shaped membership functions (MF) is considered, and its similarity to the triangular-MF variant is shown. Fuzzy GMDH with different partial descriptions, namely orthogonal Chebyshev polynomials and Fourier polynomials, is also considered. The problem of adapting fuzzy models obtained by FGMDH is addressed and the corresponding adaptation algorithm is described. An extension and generalization of fuzzy GMDH to the case of fuzzy inputs is considered and its properties are analyzed. Experimental investigations of the suggested FGMDH were carried out.
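The layer-wise model selection at the heart of (crisp) GMDH can be illustrated with a small sketch. The quadratic partial description of an input pair, the pairwise enumeration, and the validation-set external criterion below are standard GMDH ingredients; the concrete function names and the choice of keeping the `keep` best candidates are illustrative assumptions, not code from the paper.

```python
import itertools
import numpy as np

def fit_partial(x1, x2, y):
    """Least-squares fit of one quadratic partial description:
    y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_partial(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH layer: fit a partial description for every input pair,
    rank candidates by validation error (the external criterion),
    and keep the best few as inputs for the next layer."""
    candidates = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        coef = fit_partial(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict_partial(coef, X_val[:, i], X_val[:, j]) - y_val) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]
```

The fuzzy variant described in the paper replaces the crisp coefficients with fuzzy intervals, but the enumerate-fit-select loop is the same.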

    A Combined Method for Diabetes Mellitus Diagnosis Using Deep Learning, Singular Value Decomposition, and Self-Organizing Map Approaches

    Diabetes in humans is a rapidly expanding chronic disease and a major crisis in modern societies. The classification of diabetics is a challenging and important procedure that allows the interpretation of diabetic data and supports diagnosis. Missing values in datasets can impact the prediction accuracy of diagnostic methods. For this reason, a variety of machine learning techniques have been studied in the past. This research has developed a new method for diabetes risk prediction using machine learning techniques, combining clustering and prediction learning. The method uses Singular Value Decomposition for missing-value prediction, a Self-Organizing Map for clustering the data, STEPDISC for feature selection, and an ensemble of Deep Belief Network classifiers for diabetes mellitus prediction. The performance of the proposed method is compared with previous prediction methods developed with machine learning techniques. The results reveal that the deployed method can accurately predict diabetes mellitus on a set of real-world datasets.
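The SVD-based missing-value step can be sketched as an iterative low-rank imputation. The abstract does not give the exact procedure used, so `svd_impute` below is an illustration of the general idea under that assumption: fill missing entries with column means, then alternate between a rank-k SVD approximation and copying the approximated values back into the missing positions.

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=100):
    """Iterative low-rank SVD imputation: NaN entries are initialized
    with column means, then repeatedly replaced by the values of a
    rank-`rank` SVD approximation of the current matrix."""
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[missing] = approx[missing]
    return X
```

When the data are well approximated by a low-rank structure, the imputed entries converge toward values consistent with that structure.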

    Fuzzy Logic

    This book introduces the capability of Fuzzy Logic in the development of emerging technologies. It consists of sixteen chapters showing various applications in the fields of Bioinformatics, Health, Security, Communications, Transportation, Financial Management, and Energy and Environment Systems. The book is a major reference source for all those concerned with applied intelligent systems. The intended readers are researchers, engineers, medical practitioners, and graduate students interested in fuzzy logic systems.

    Optimisation of Mobile Communication Networks - OMCO NET

    The mini-conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University. The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise new, successful approaches to resolving hard tasks such as minimisation of transmit power and cooperative and optimal routing.

    Bioinformatics

    This book is divided into different research areas relevant to Bioinformatics, such as biological networks, next-generation sequencing, high-performance computing, molecular modeling, structural bioinformatics, and intelligent data analysis. Each book section introduces the basic concepts and then explains their application to problems of great relevance, so that both novice and expert readers can benefit from the information and research works presented here.

    Optimal Model-parameter Determination for Feedforward Artificial Neural Networks

    Neural networks are an immensely versatile tool for state-of-the-art prediction problems. However, they require a training process that involves numerous hyper-parameters. This creates a training process that demands expert knowledge to configure and is often described as trial and error. The result is a training process that needs to be executed multiple times, which is highly time-consuming. Currently, one solution to this problem is the Grid-Search algorithm, where a set of possible values (essentially guesses) is declared for each hyper-parameter and each combination of hyper-parameters is used to configure a training session. Once the training of each model (hyper-parameter combination) is completed, the best-performing model is retained and the rest are discarded. The problem with this is that it can be wasteful, as it explores hyper-parameter combinations that predictably produce poor models. It is also very time-consuming and scales poorly with the size of the model. A number of methods are proposed in this thesis to efficiently derive hyper-parameters and model parameters, and the empirical results are presented. These methods fall into two categories: Weight-Direct Determination (WDD) and the Simple Effective Evolutionary Method (SEEM). The former exhibits success in certain cases, whereas the latter exhibits broad success across classification and regression, for datasets with both large and small numbers of samples and features. The thesis concludes that WDD is only effective on small datasets (both in terms of the number of samples and the number of input features). This is due to its dependence on Delaunay triangulation, which exhibits a quadratic time complexity with respect to the number of input samples. The WDD methods developed in this research are therefore deemed not optimal for achieving general-purpose application of Multi-Layer Perceptrons.
However, the Complete Simple Effective Evolutionary Method (CSEEM) from the SEEM chapter shows great promise, as it performs effectively on the `Knowledge Extraction based on Evolutionary Learning' (KEEL) datasets for both regression and classification. It achieves this effectiveness while requiring only a single hyper-parameter (the number of children in a population) that is fairly invariant across datasets. In this thesis, CSEEM is applied to real-world regression and classification problems and is compared to RMSProp (a gradient-dependent iterative method). In both categories, CSEEM consistently achieves a lower normalized square loss and a higher classification accuracy, respectively, versus the number of hidden nodes when compared to RMSProp.
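The exhaustive grid search that the thesis criticizes can be sketched in a few lines. `grid_search` and the toy scoring function are illustrative names, not code from the thesis; the point is that the cost grows as the product of the sizes of all value lists.

```python
import itertools

def grid_search(train_and_score, grid):
    """Exhaustive grid search: try every combination of hyper-parameter
    values and keep the best-scoring result. `train_and_score` is assumed
    to return (model, validation_score), higher score being better."""
    best = None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        model, score = train_and_score(**params)
        if best is None or score > best[0]:
            best = (score, params, model)
    return best
```

With a hypothetical two-parameter grid of three values each, nine full training runs are required, which is exactly the wasteful scaling described above.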

    Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning

    With the emergence of biomedical informatics, Web intelligence, and e-business, new challenges arise for knowledge discovery and data mining modeling problems. In this dissertation work, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory and soft computing theory to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification problems. In general, GSVM works in three steps. Step 1 is granulation, building a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling, training Support Vector Machines (SVM) in some of these information granules where necessary. Finally, step 3 is aggregation, consolidating the information in these granules at a suitable level of abstraction. A good granulation method that finds suitable granules is crucial for modeling a good GSVM. Under this framework, many different granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. Empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. As a preliminary step, this dissertation work will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM) with which hybrid adaptive intelligent data mining systems for high-quality prediction can be created.
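The three GSVM steps can be illustrated with a deliberately simplified skeleton. To keep the sketch dependency-free, granulation is stood in for by a quantile split on one feature and a nearest-centroid rule replaces the per-granule SVM; all function names and the routing scheme are hypothetical, chosen only to show the granulate / model / aggregate structure.

```python
import numpy as np

def granulate(X, n_granules=2):
    """Step 1 (granulation): a crude stand-in that assigns samples to
    granules by quantile bins of the first feature."""
    edges = np.quantile(X[:, 0], np.linspace(0, 1, n_granules + 1))
    ids = np.clip(np.searchsorted(edges[1:-1], X[:, 0]), 0, n_granules - 1)
    return ids, edges

def fit_granule(X, y):
    """Step 2 (modeling): per-granule classifier. A nearest-centroid rule
    stands in for the SVM here."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def gsvm_predict(x, edges, models):
    """Step 3 (aggregation): route a sample to its granule's model and
    return that model's prediction."""
    n_granules = len(models)
    g = int(np.clip(np.searchsorted(edges[1:-1], x[0]), 0, n_granules - 1))
    centroids = models[g]
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

In the real framework, the granulation step is where the listed algorithms (GSVM-CMW, GSVM-AR, GSVM-RFE, GSVM-DC, GSVM-RU) differ.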

    A survey of the application of soft computing to investment and financial trading


    Transformation of graphical models to support knowledge transfer

    Human experts are able to flexibly adjust their decision behaviour to the situation at hand. This capability pays off particularly when decisions must be made under limited resources such as time restrictions. In such situations it is advantageous to adapt the underlying knowledge representation and to make use of decision models at different levels of abstraction. Furthermore, human experts are able to include uncertain information as well as vague perceptions in decision making. Classical decision-theoretic models are based on the concept of rationality, whereby the decision behaviour is prescribed by the principle of maximum expected utility: for each observation, an optimal decision function prescribes the action that maximizes expected utility. Modern graph-based methods such as Bayesian networks or influence diagrams make decision-theoretic methods attractive from a modelling point of view. Their main disadvantage is complexity: finding an optimal decision can become very expensive, and inference in decision networks is known to be NP-hard. This dissertation aims at combining the advantages of decision-theoretic models with those of rule-based systems by transforming a decision-theoretic model into a fuzzy rule-based system. Fuzzy rule bases can be evaluated efficiently, can approximate non-linear functional dependencies, and guarantee the interpretability of the resulting behaviour model. The translation of a decision model into a fuzzy rule base is supported by a new transformation process. An agent can first apply a new parameterized structure learning algorithm, introduced in this work, to identify the structure of a Bayesian network. The subsequent use of preference learning methods and the specification of probability information make it possible to model decision and utility nodes and to obtain a consolidated decision-theoretic model. A transformation algorithm then compiles a rule base from this model, with an approximation measure that computes the expected utility loss as a quality criterion. The practicality of the concept is demonstrated on an example of condition monitoring of a rotation spindle.
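A compiled fuzzy rule base of the kind described can be evaluated very cheaply, which is the motivation for the transformation. The following miniature condition-monitoring rule base (the membership functions, rule names, and thresholds are all invented for illustration) shows a max-membership evaluation scheme standing in for the compiled decision function.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base: each rule maps a fuzzy set over a normalized
# vibration reading to an action, mimicking a compiled decision function.
SETS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.2, 0.5, 0.8),
    "high":   (0.5, 1.0, 1.0),
}
RULES = [("low", "continue"), ("medium", "inspect"), ("high", "stop")]

def decide(x):
    """Evaluate every rule's firing strength and return the action of
    the rule that fires most strongly."""
    strengths = [(tri(x, *SETS[s]), action) for s, action in RULES]
    return max(strengths)[1]
```

Each decision costs only a handful of membership evaluations, in contrast to NP-hard inference in the original decision network; the approximation measure mentioned above quantifies the expected utility lost by this compilation.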