Computational intelligence techniques in asset risk analysis
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The problem of asset risk analysis is positioned within the computational intelligence paradigm. We suggest an algorithm for reformulating asset pricing, which incorporates imprecise information into the pricing factors through fuzzy variables, together with a calibration procedure for their possibility distributions. Fuzzy mathematics is then used to process the imprecise factors and obtain an asset evaluation. This evaluation is further automated using neural networks with sign restrictions on their weights. While networks of this type have previously been used with only up to two inputs and hypothetical data, here we apply thirty-six inputs and empirical data. To achieve successful training, we modify the Levenberg-Marquardt backpropagation algorithm. The intermediate result is that the fuzzy asset evaluation inherits the features of the factor imprecision and provides the basis for risk analysis. Next, we formulate a risk measure and a risk robustness measure based on the fuzzy asset evaluation under different characteristics of the pricing factors as well as different calibrations. Our database, extracted from DataStream, includes thirty-five companies traded on the London Stock Exchange. For each company, the risk and robustness measures are evaluated, and an asset risk analysis is carried out through these values, indicating their implications for company performance. A comparative company risk analysis is also provided. We then employ both risk measures to formulate a two-step asset ranking method. The assets are initially rated according to the investors' risk preference. An algorithm is then suggested to incorporate the asset robustness information and further refine the ranking, benefiting market analysts. The rationale provided by the ranking technique serves as a point of departure in designing an asset risk classifier.
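The fuzzy processing of imprecise pricing factors can be sketched with triangular fuzzy numbers and a weighted aggregation. This is an illustrative simplification, not the thesis's actual evaluation model; the factor values and weights below are invented.

```python
# Hedged sketch: imprecise pricing factors as triangular fuzzy numbers,
# aggregated into a fuzzy asset evaluation by interval-style arithmetic.

class TriangularFuzzy:
    """Triangular fuzzy number (lo, mode, hi)."""
    def __init__(self, lo, mode, hi):
        assert lo <= mode <= hi
        self.lo, self.mode, self.hi = lo, mode, hi

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is component-wise.
        return TriangularFuzzy(self.lo + other.lo,
                               self.mode + other.mode,
                               self.hi + other.hi)

    def scale(self, k):
        # Multiplication by a non-negative crisp weight.
        return TriangularFuzzy(k * self.lo, k * self.mode, k * self.hi)

def fuzzy_evaluation(factors, weights):
    """Weighted aggregation of fuzzy pricing factors into a fuzzy asset value."""
    total = TriangularFuzzy(0.0, 0.0, 0.0)
    for f, w in zip(factors, weights):
        total = total + f.scale(w)
    return total

# Hypothetical factors (e.g. an earnings-based and a book-value-based factor).
ev = fuzzy_evaluation([TriangularFuzzy(0.04, 0.05, 0.07),
                       TriangularFuzzy(0.50, 0.60, 0.80)],
                      [0.7, 0.3])
print((round(ev.lo, 3), round(ev.mode, 3), round(ev.hi, 3)))
```

The spread `hi - lo` of the resulting fuzzy value is the kind of quantity a risk measure built on the fuzzy evaluation could exploit.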
We identify the fuzzy neural network structure of the classifier and develop an evolutionary training algorithm. The algorithm starts by suggesting preliminary heuristics for constructing a sufficient training set of assets with various characteristics, revealed by the values of the pricing factors and the asset risk values. The training algorithm then works at two levels: the inner level targets weight optimization, while the outer level efficiently guides the exploration of the search space. The latter is achieved by automatically decomposing the training set into subsets of decreasing complexity and then incrementing backward the corresponding subpopulations of partially trained networks. The empirical results prove that the developed algorithm is capable of training the identified fuzzy network structure, a problem of such complexity that single-level evolution cannot attain meaningful results. The final outcome is an automatic asset classifier based on the investors' perceptions of acceptable risk. All the steps described above constitute our approach to reformulating asset risk analysis within the approximate reasoning framework through the fusion of various computational intelligence techniques.
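The two-level idea can be sketched as an outer curriculum loop over training subsets ordered by difficulty and an inner mutate-and-select loop over weights. Everything here (the toy linear model, the fitness, the subset split) is an illustrative assumption, not the thesis's actual fuzzy-network training.

```python
# Hedged sketch of two-level evolutionary training: the outer level adds
# training subsets back in (easiest first), the inner level evolves weights.
import random

def fitness(weights, subset):
    # Toy surrogate: negative squared error of a linear model on (x, y) pairs.
    return -sum((sum(w * xi for w, xi in zip(weights, x)) - y) ** 2
                for x, y in subset)

def inner_evolve(pop, subset, generations=30, sigma=0.1):
    """Inner level: elitist mutate-and-select weight optimization on one subset."""
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: fitness(w, subset), reverse=True)
        elite = scored[: len(pop) // 2]
        pop = elite + [[w + random.gauss(0, sigma) for w in random.choice(elite)]
                       for _ in range(len(pop) - len(elite))]
    return pop

def outer_train(subsets_by_difficulty, dim=2, pop_size=20):
    """Outer level: start on the easiest subset, then fold harder ones back in."""
    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    seen = []
    for subset in subsets_by_difficulty:   # easiest first
        seen = subset + seen               # increment the working set backward
        pop = inner_evolve(pop, seen)
    return max(pop, key=lambda w: fitness(w, seen))

# Hypothetical data generated by y = 2*x0 + 1*x1, split into two tiers.
easy = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0)]
hard = [((1.0, 1.0), 3.0), ((2.0, -1.0), 3.0)]
best = outer_train([easy, hard])
print([round(w, 2) for w in best])
```

The design point the abstract makes is that the staged outer loop keeps the inner search in tractable regions, which a single flat evolution over the full set would miss.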
Evidential Evolving Gustafson-Kessel Algorithm For Online Data Streams Partitioning Using Belief Function Theory.
A new online clustering method called E2GK (Evidential Evolving Gustafson-Kessel) is introduced. This partitional clustering algorithm is based on the concept of a credal partition, defined in the theoretical framework of belief functions. A credal partition is derived online by applying an algorithm resulting from the adaptation of the Evolving Gustafson-Kessel (EGK) algorithm. Online partitioning of data streams thus becomes possible with a meaningful interpretation of the data structure. A comparative study with the original online procedure shows that E2GK outperforms EGK on different input data sets. To demonstrate the performance of E2GK, several experiments were conducted on synthetic data sets as well as on data collected from a real application. A sensitivity study of the parameters is also carried out, and solutions are proposed to limit complexity issues.
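The evolving step that E2GK adapts from EGK can be sketched as an online loop that assigns each incoming point to the nearest cluster under a Mahalanobis distance (reflecting Gustafson-Kessel's per-cluster covariances) or spawns a new cluster when no centre is close enough. The update rules and the threshold below are illustrative assumptions, not the published algorithm.

```python
# Hedged sketch of an evolving Gustafson-Kessel-style online partitioner:
# per-cluster centres and covariances updated incrementally from a stream.
import numpy as np

def online_partition(stream, threshold=3.0):
    centers, covs, counts = [], [], []
    for x in stream:
        if centers:
            # Mahalanobis distance to each existing cluster.
            d = [np.sqrt((x - c) @ np.linalg.inv(S) @ (x - c))
                 for c, S in zip(centers, covs)]
            k = int(np.argmin(d))
        if not centers or d[k] > threshold:
            # No cluster explains the point: evolve the structure.
            centers.append(x.astype(float))
            covs.append(np.eye(len(x)))
            counts.append(1)
        else:
            # Running-mean update of the winning cluster's centre and covariance.
            counts[k] += 1
            lr = 1.0 / counts[k]
            centers[k] = centers[k] + lr * (x - centers[k])
            diff = (x - centers[k]).reshape(-1, 1)
            covs[k] = (1 - lr) * covs[k] + lr * (diff @ diff.T)
    return centers

rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 0.3, (50, 2)),   # first cloud near (0, 0)
                    rng.normal(5, 0.3, (50, 2))])  # second cloud near (5, 5)
centers = online_partition(stream)
print(len(centers))
```

E2GK's contribution on top of this structural evolution is the credal interpretation: belief masses rather than hard memberships, which is what gives the partition its meaningful reading of ambiguous points.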
An Intelligent Knowledge Management System from a Semantic Perspective
Knowledge Management Systems (KMS) are important tools by which organizations can better use information and, more importantly, manage knowledge. Unlike other strategies, knowledge management (KM) is difficult to define because it encompasses a range of concepts, management tasks, technologies, and organizational practices, all of which come under the umbrella of information management. Semantic approaches allow easier and more efficient training, maintenance, and support of knowledge. Current ICT markets are dominated by relational databases and document-centric information technologies, procedural algorithmic programming paradigms, and stack architectures. A key driver of global economic expansion in the coming decade is the build-out of broadband telecommunications and the deployment of intelligent service bundling. This paper introduces the main characteristics of an Intelligent Knowledge Management System as a multiagent system used in a Learning Control Problem (IKMSLCP), from a semantic perspective. We describe an intelligent KM framework that allows the observer (a human agent) to learn from experience. This framework makes the system dynamic (flexible and adaptable) so that it evolves, guaranteeing high levels of stability when performing its domain problem P. To enable the agent to capture the control knowledge for solving a task-allocation problem, the control expert system uses, at any time, an internal fuzzy knowledge model of the (business) process based on the last knowledge model. Keywords: knowledge management, fuzzy control, semantic technologies, computational intelligence
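The kind of fuzzy knowledge model the control expert system maintains can be illustrated with a tiny rule-based controller mapping an agent's load to a task-allocation priority. The membership functions, rule base, and defuzzification below are invented for illustration and are not taken from the paper.

```python
# Illustrative two-rule fuzzy controller for a task-allocation decision:
# IF load is low THEN priority is high; IF load is high THEN priority is low.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def allocation_priority(load):
    """Degree (0..1) to which new tasks should be routed to this agent."""
    low = tri(load, -0.5, 0.0, 0.6)    # membership of 'load is low'
    high = tri(load, 0.4, 1.0, 1.5)    # membership of 'load is high'
    # Sugeno-style weighted average over crisp rule consequents (1.0 and 0.0).
    num = low * 1.0 + high * 0.0
    den = low + high
    return num / den if den else 0.5   # neutral priority if no rule fires

print(round(allocation_priority(0.2), 3))
```

An interpolating rule base like this is one way a control expert system can stay adaptable: updating the memberships from the latest knowledge model changes behaviour smoothly rather than by brittle threshold rewrites.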
Introduction to the 28th International Conference on Logic Programming Special Issue
We are proud to introduce this special issue of the Journal of Theory and
Practice of Logic Programming (TPLP), dedicated to the full papers accepted for
the 28th International Conference on Logic Programming (ICLP). The ICLP
meetings started in Marseille in 1982 and since then constitute the main venue
for presenting and discussing work in the area of logic programming.
Evidential Evolving Gustafson-Kessel Algorithm (E2GK) and its application to PRONOSTIA's Data Streams Partitioning.
Condition-based maintenance (CBM) appears to be a key element in modern maintenance practice. Research in diagnosis and prognosis, two important aspects of a CBM program, is growing rapidly, and many studies are conducted in research laboratories to develop models, algorithms, and technologies for data processing. In this context, we present a new evolving clustering algorithm developed for prognostics purposes. E2GK (Evidential Evolving Gustafson-Kessel) is an online clustering method in the theoretical framework of belief functions. The algorithm enables online partitioning of data streams based on two existing and efficient algorithms: Evidential c-Means (ECM) and Evolving Gustafson-Kessel (EGK). To validate and illustrate the results of E2GK, we use a dataset provided by an original platform called PRONOSTIA, dedicated to prognostics applications.
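The credal-partition side that E2GK inherits from ECM can be sketched as follows: besides singleton clusters, a point may assign belief mass to the whole frame Ω, expressing ambiguity or outlierness. The inverse-distance mass assignment below is a deliberate simplification (real ECM optimizes masses over all subsets of Ω); the `outlier_dist` parameter is invented.

```python
# Hedged sketch of a credal mass assignment: singleton cluster masses plus
# a mass on Omega (total ignorance) that grows when no centre is nearby.
import math

def credal_masses(point, centers, outlier_dist=5.0):
    """Return belief masses over singleton clusters plus Omega for one point."""
    dists = [math.dist(point, c) for c in centers]
    inv = [1.0 / (d + 1e-9) for d in dists]        # closer => more mass
    ambiguity = min(1.0, min(dists) / outlier_dist)  # far from all => ignorance
    total = sum(inv)
    masses = {i: (1 - ambiguity) * v / total for i, v in enumerate(inv)}
    masses["Omega"] = ambiguity
    return masses

# A point near the first of two centres: most mass on cluster 0, little on Omega.
m = credal_masses((0.1, 0.0), [(0.0, 0.0), (4.0, 0.0)])
print({k: round(v, 3) for k, v in m.items()})
```

In a prognostics stream such as PRONOSTIA's, the mass on Ω is useful in itself: a drift toward ignorance can flag a degradation regime no existing cluster explains.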
New methods for discovering local behaviour in mixed databases
Clustering techniques are widely used; there are many applications where it is desirable to automatically find groups or hidden information in a data set. Building a model of a system through the integration of several local models is one such application. A local model can take many structures; however, a linear structure is the most common, due to its simplicity. This work aims at improvements in several fields, all of which are applied to finding such a set of local models in a database. On the one hand, a way of codifying categorical information into numerical values has been designed, in order to apply a numerical algorithm to the whole data set. On the other hand, a cost index has been developed, which is optimized globally to find the parameters of the local clusters that best define the output of the process. Each of the techniques has been applied to several experiments, and the results show improvements over existing techniques. Barceló Rico, F. (2009). New methods for discovering local behaviour in mixed databases. http://hdl.handle.net/10251/12739
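The two ingredients of the abstract can be sketched together: a numerical codification of a categorical column, followed by fitting one linear model per discovered group. The frequency encoding, the toy data, and the pre-supplied group labels are illustrative assumptions, not the thesis's actual codification or cost index.

```python
# Hedged sketch: encode categories numerically, then fit local linear models.
import numpy as np

def encode_categorical(column):
    """Replace each category by its relative frequency (one possible codification)."""
    values, counts = np.unique(column, return_counts=True)
    freq = dict(zip(values, counts / len(column)))
    return np.array([freq[v] for v in column])

def fit_local_models(X, y, labels):
    """Least-squares linear model (slope..., intercept) per cluster label."""
    models = {}
    for k in np.unique(labels):
        mask = labels == k
        A = np.column_stack([X[mask], np.ones(mask.sum())])  # add intercept column
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        models[k] = coef
    return models

cat = np.array(["a", "a", "b", "a", "b", "c"])
print(encode_categorical(cat))
```

In the thesis's setting the group labels themselves come from the clustering driven by the global cost index; here they are simply given, to isolate the local-model fitting step.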