
    Comparative study of central decision makers versus groups of evolved agents trading in equity markets

    This paper investigates the process of deriving a single decision solely from the decisions made by a population of experts. Four different amalgamation processes, collectively referred to as central decision makers, are studied and compared with one another. The expert (or reference) population is trained on historical equity market data to make trading decisions, using a simple genetic algorithm with crossover, elitism and immigration. The performance of the trained agent population’s elite, as determined by testing on an out-of-sample data set, is also compared to that of the centralized decision makers to determine which performs better. Performance was measured as the area under the total-assets graph over the out-of-sample testing period, rather than the more traditional measure of profit, to avoid biasing results towards the cut-off date. Results showed that none of the methods of deriving a centralized decision implemented in this investigation outperformed the evolved and optimized agent population. Further, no difference in performance was found between the four central decision makers.
    Keywords: Agents, Decision Making, Equity Market Trading, Genetic Algorithms, Technical Indicators
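
    As a hedged illustration of the performance measure described above, the sketch below approximates the area under a total-assets curve with the trapezoidal rule; the function name, sampling interval and example series are assumptions, not the paper's code.

```python
# Illustrative sketch of the area-under-total-assets performance measure,
# approximated with the trapezoidal rule. Names and dt are assumptions.
def assets_area(total_assets, dt=1.0):
    """Trapezoidal area under a total-assets series sampled every dt periods."""
    return sum((a + b) * 0.5 * dt for a, b in zip(total_assets, total_assets[1:]))

# Two agents ending with the same final assets but growing along different paths:
steady = [100, 105, 110, 115, 120]   # consistent growth over the test period
late   = [100, 100, 100, 100, 120]   # all gains arrive just before the cut-off date
print(assets_area(steady), assets_area(late))  # 440.0 vs 410.0: the steady path scores higher
```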

    Integrative analysis of large-scale biological data sets

    We present two novel web applications for microarray and gene/protein set analysis, ArrayMining.net and TopoGSA. These bioinformatics tools use integrative analysis methods, including ensemble and consensus machine learning techniques, as well as modular combinations of different analysis types, to extract new biological insights from experimental transcriptomics and proteomics data. They enable researchers to combine related algorithms and datasets to increase the robustness and accuracy of statistical analyses and to exploit synergies of different computational methods, ranging from statistical learning to optimization and topological network analysis.
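
    The kind of ensemble classification such tools combine can be illustrated with a minimal, hedged sketch: several base learners vote on each sample's class. The dataset here is a synthetic stand-in for a gene-expression matrix, and all model choices are assumptions rather than ArrayMining.net's actual pipeline.

```python
# Illustrative ensemble classification sketch (not ArrayMining.net code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a samples x genes expression matrix.
X, y = make_classification(n_samples=100, n_features=500, n_informative=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("lr", LogisticRegression(max_iter=2000)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft",  # average predicted class probabilities across the base learners
)
print(cross_val_score(ensemble, X, y, cv=5).mean())  # robustness via 5-fold cross-validation
```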

    An Idiotypic Immune Network as a Short Term Learning Architecture for Mobile Robots

    A combined Short-Term Learning (STL) and Long-Term Learning (LTL) approach to solving mobile robot navigation problems is presented and tested in both real and simulated environments. The LTL consists of rapid simulations that use a Genetic Algorithm to derive diverse sets of behaviours. These sets are then transferred to an idiotypic Artificial Immune System (AIS), which forms the STL phase, and the system is said to be seeded. The combined LTL-STL approach is compared with using STL only, and with using a hand-designed controller. In addition, the STL phase is tested when the idiotypic mechanism is turned off. The results provide substantial evidence that the best option is the seeded idiotypic system, i.e. the architecture that merges LTL with an idiotypic AIS for the STL. They also show that structurally different environments can be used for the two phases without compromising transferability.
    Comment: 13 pages, 5 tables, 4 figures, 7th International Conference on Artificial Immune Systems (ICARIS 2008), Phuket, Thailand.
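
    As a loose illustration of the idiotypic selection such an STL phase relies on, the hedged sketch below scores candidate behaviours (antibodies) by their match to the current environment state (antigen), plus stimulation and minus suppression from the other antibodies, weighted by concentration, in the spirit of Farmer-style idiotypic network models. All names, constants and matrices are invented for demonstration and are not the paper's implementation.

```python
# Illustrative idiotypic behaviour-selection step (assumed, simplified form).
import numpy as np

def idiotypic_select(antigen_match, stim, supp, conc, k1=0.85, k2=1.0):
    """antigen_match: (n,) match of each antibody (behaviour) to the antigen.
    stim, supp: (n, n) pairwise stimulation / suppression between antibodies.
    conc: (n,) current antibody concentrations."""
    activation = (antigen_match
                  + k1 * (stim @ conc)    # idiotypic stimulation by other antibodies
                  - k2 * (supp @ conc))   # idiotypic suppression by other antibodies
    return int(np.argmax(activation * conc))  # index of the behaviour chosen for execution

# Tiny example with three candidate behaviours:
rng = np.random.default_rng(0)
match = np.array([0.6, 0.9, 0.4])
stim, supp = rng.random((3, 3)) * 0.1, rng.random((3, 3)) * 0.1
conc = np.ones(3) / 3
print(idiotypic_select(match, stim, supp, conc))
```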

    A novel framework to elucidate core classes in a dataset

    In this paper we present an original framework to extract representative groups from a dataset, and we validate it on a novel case study. The framework applies different clustering algorithms, uses several statistical and visualisation techniques to characterise the results, and defines core classes by consensus clustering. These classes may then be verified using supervised classification algorithms to obtain a set of rules that may be useful for classifying new data points in the future. The framework is validated on a novel set of histone markers for breast cancer patients. From a technical perspective, the resultant classes are well separated and characterised by low, medium and high levels of biological markers. Clinically, the groups appear to distinguish patients with poor overall survival from those with low grading score and better survival. Overall, this framework offers a promising methodology for elucidating core consensus groups from data.
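
    The consensus-clustering step can be illustrated with a minimal sketch under assumed parameter choices: several base clusterings vote on whether two samples belong together via a co-association matrix, and the core classes are then cut from the resulting consensus. This is an illustrative reconstruction, not the paper's pipeline.

```python
# Illustrative co-association (consensus) clustering sketch; parameters are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans

def consensus_cluster(X, n_classes=3, n_runs=50, k_range=(2, 6), seed=0):
    """Consensus clustering over repeated k-means runs with varying k."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X)
        coassoc += labels[:, None] == labels[None, :]   # vote: same cluster or not
    coassoc /= n_runs
    # Treat (1 - consensus) as a distance and cut the dendrogram into core classes.
    Z = linkage(squareform(1.0 - coassoc, checks=False), method="average")
    return fcluster(Z, t=n_classes, criterion="maxclust")

# Synthetic stand-in for a patients x biomarkers matrix:
X = np.random.default_rng(1).normal(size=(60, 8))
print(consensus_cluster(X, n_classes=3)[:10])
```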

    A comparison between two types of Fuzzy TOPSIS method

    Multi Criteria Decision Making methods have been developed to solve complex real-world decision problems. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is currently one of the most popular methods and has been shown to provide helpful outputs in various application areas. In recent years, a variety of extensions, including fuzzy extensions of TOPSIS, have been proposed. One challenge that has arisen is that it is not straightforward to differentiate between the multiple variants of TOPSIS existing today. Thus, in this paper, a comparison is made between the classical Fuzzy TOPSIS method proposed by Chen in 2000 and the Fuzzy TOPSIS extension recently proposed by Yuen in 2014. The purpose of this comparative study is to show the difference between the two methods and to provide context for their respective strengths and limitations, both in complexity of application and in expressiveness of results. A detailed synthetic numeric example and a comparison of both methods are provided.
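
    For context, the sketch below reconstructs the classical fuzzy TOPSIS procedure in the style of Chen (2000) using triangular fuzzy numbers (l, m, u). It is an illustrative sketch rather than code from the paper, and the example ratings and weights are invented.

```python
# Illustrative fuzzy TOPSIS sketch (Chen-2000 style); data below are invented.
import numpy as np

def vertex_distance(a, b):
    """Vertex distance between two triangular fuzzy numbers (l, m, u)."""
    return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def fuzzy_topsis(ratings, weights):
    """ratings: (alternatives, criteria, 3) TFN ratings on benefit criteria.
    weights: (criteria, 3) TFN weights. Returns closeness coefficients."""
    ratings, weights = np.asarray(ratings, float), np.asarray(weights, float)
    u_max = ratings[:, :, 2].max(axis=0)            # largest upper bound per criterion
    norm = ratings / u_max[None, :, None]           # normalised fuzzy decision matrix
    weighted = norm * weights[None, :, :]           # weighted normalised TFNs
    fpis, fnis = np.array([1.0, 1.0, 1.0]), np.array([0.0, 0.0, 0.0])
    d_plus = np.array([[vertex_distance(t, fpis) for t in alt] for alt in weighted]).sum(axis=1)
    d_minus = np.array([[vertex_distance(t, fnis) for t in alt] for alt in weighted]).sum(axis=1)
    return d_minus / (d_plus + d_minus)             # closeness coefficient per alternative

# Two alternatives rated on two benefit criteria (invented TFNs on a 0-10 scale):
ratings = [[(7, 8, 9), (5, 6, 7)],
           [(6, 7, 8), (8, 9, 10)]]
weights = [(0.6, 0.7, 0.8), (0.2, 0.3, 0.4)]
print(fuzzy_topsis(ratings, weights))
```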

    Exploring subsethood to determine firing strength in non-singleton fuzzy logic systems

    Real-world environments face a wide range of sources of noise and uncertainty. Thus, the ability to handle various uncertainties, including noise, becomes an indispensable element of automated decision making. Non-Singleton Fuzzy Logic Systems (NSFLSs) have the potential to tackle uncertainty within the design of fuzzy systems. The firing strength has a significant role in the accuracy of FLSs, being based on the interaction of the input and antecedent fuzzy sets. Recent studies have shown that the standard technique for determining firing strengths risks substantial information loss in terms of the interaction of the input and antecedents. This issue has recently been addressed through the exploration of alternative approaches which employ the centroid of the intersection (cen-NS) and the similarity (sim-NS) between input and antecedent fuzzy sets. This paper identifies potential shortcomings of the previously introduced similarity-based NSFLSs, in which the firing strength is defined as the similarity between an input FS and an antecedent. To address these shortcomings, this paper explores the potential of the subsethood measure to generate a more suitable firing level (sub-NS) in NSFLSs under various noise levels. In the experiment, the basic waiter-tipping fuzzy logic system is used to examine the behaviour of sub-NS in comparison with the current approaches. Analysis of the results shows that the sub-NS approach can lead to more stable behaviour in real-world applications.
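
    The contrast between the standard sup-min firing strength and a subsethood-based one can be sketched on a discretised universe as below; the Gaussian set shapes, names and parameters are assumptions for illustration, not the authors' exact formulation.

```python
# Illustrative comparison of sup-min versus subsethood-based firing strength
# in a non-singleton FLS, on a discretised universe (all shapes are assumed).
import numpy as np

def gaussian(x, c, sigma):
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def subsethood(mu_a, mu_b):
    """Kosko-style subsethood of fuzzy set A in B on a shared discrete universe."""
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(mu_a)

x = np.linspace(0, 10, 1001)
mu_input = gaussian(x, c=4.0, sigma=0.8)       # noisy (non-singleton) input centred on 4
mu_antecedent = gaussian(x, c=5.0, sigma=1.5)  # antecedent fuzzy set of one rule

standard_firing = np.max(np.minimum(mu_input, mu_antecedent))  # sup-min (standard NSFLS)
sub_firing = subsethood(mu_input, mu_antecedent)               # subsethood-based (sub-NS)
print(round(standard_firing, 3), round(sub_firing, 3))
```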

    An exploration of issues and limitations in current methods of TOPSIS and fuzzy TOPSIS

    Decision making is an important process for organizations. Common practice involves the evaluation of prioritized alternatives based on a given set of criteria. These criteria typically conflict with each other, and commonly no solution can satisfy all of them simultaneously. This is known as a Multi Criteria Decision Making (MCDM) or Multi Criteria Decision Analysis (MCDA) problem. One of the well-known techniques in MCDM is the ‘Technique for Order Preference by Similarity to Ideal Solution’ (TOPSIS), which was introduced by Hwang and Yoon in 1981 [1]. However, this technique uses crisp information, which is impractical in many real-world situations because decision makers usually express opinions in natural language, such as Poor and Good. Information in the form of natural language, i.e. words, is in turn characterized by fuzziness and uncertainty (e.g. what is the meaning of ‘poor’?). This uncertainty can be a challenge for decision makers. Zadeh [2] introduced the concept of fuzzy sets, which enables systematic reasoning with imprecise and fuzzy information by using fuzzy sets to represent linguistic terms numerically and thereby handle uncertain human judgement.
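
    For reference, the sketch below implements classical (crisp) TOPSIS as introduced by Hwang and Yoon: vector-normalise the decision matrix, weight it, identify the ideal and anti-ideal solutions, and rank alternatives by relative closeness. The example matrix, weights and criterion types are illustrative assumptions.

```python
# Illustrative crisp TOPSIS sketch; the example data are invented.
import numpy as np

def topsis(decision_matrix, weights, benefit_mask):
    """decision_matrix: (alternatives, criteria); benefit_mask[j] is True if criterion j is a benefit."""
    M = np.asarray(decision_matrix, float)
    w = np.asarray(weights, float)
    R = M / np.linalg.norm(M, axis=0)          # vector normalisation per criterion
    V = R * w                                  # weighted normalised matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # higher means closer to the ideal solution

# Three alternatives rated on two benefit criteria and one cost criterion:
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 7]],
                weights=[0.5, 0.3, 0.2],
                benefit_mask=np.array([True, True, False]))
print(scores.argsort()[::-1])  # ranking of alternatives, best first
```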