87 research outputs found

    Inferring Best Strategies from the Aggregation of Information from Multiple Agents: The Cultural Approach

    Although learning in MAS is described as a collective experience, its modeling usually draws solely or mostly on the results of interaction between agents. This contrasts sharply with our everyday experience, where learning relies to a great extent on a large stock of already codified knowledge rather than on direct interaction among agents. While this reliance on codified knowledge has been important throughout human history, especially since the invention of writing, over the last decade the size and availability of this stock have grown remarkably thanks to the Internet. Moreover, since its early days humanity has endowed itself with institutions and organizations devoted to codifying, preserving and diffusing knowledge. Cultural Algorithms are one of the few cases where modeling this process, albeit in a limited way, has been attempted. Even in this case, however, the modeling lacks some of the characteristics that have made the process so successful in human populations, notably its frugality (learning from only a rather small subset of the population), a discussion of its dynamics in terms of hypothesis generation and falsification, and the relationship between adaptation and discovery. A deep understanding of this process of collective learning, in all its aspects of generalization, re-adoption of this collective and distilled knowledge, and diffusion, is key to understanding how human communities function and how a mixed community of humans and electronic agents could learn effectively. This matters now more than ever, because the process has become global and available to large populations while also gaining greatly in speed. This research aims to help close this gap by elucidating the frugality of the mechanism while mapping it onto a framework with a variable level of knowledge complexity. It also seeks to understand the macro dynamics that result from the micro mechanisms and strategies chosen by the agents. Nevertheless, like any modeling exercise, it portrays a stylized description of reality that misses important aspects of real behavior. In this case, while we focus on individual learning and on the process of generalization and later re-use of these generalizations, learning from other agents is notably absent. We believe, however, that this choice makes our model easier to understand and makes it easier to expose the causal relationships emerging from our simulation exercises, without sacrificing any significant result.
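
    As a concrete illustration of the mechanism the abstract describes, the sketch below implements a minimal Cultural Algorithm loop in Python: a belief space (here just an interval of promising values) is updated only from a small elite subset of the population, and in turn guides the variation of the next generation. The toy fitness function, the parameters, and the interval-based belief space are illustrative assumptions, not the authors' model.

```python
import random

# Toy fitness: maximize a simple unimodal function (illustrative only).
def fitness(x):
    return -(x - 3.0) ** 2

POP_SIZE, ELITE_FRACTION, GENERATIONS = 30, 0.1, 50

population = [random.uniform(-10, 10) for _ in range(POP_SIZE)]
belief = {"lo": -10.0, "hi": 10.0}   # normative knowledge: an interval

for gen in range(GENERATIONS):
    # Acceptance function: only a small elite updates the belief space
    # (the "frugality" discussed above).
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: max(1, int(ELITE_FRACTION * POP_SIZE))]
    belief["lo"], belief["hi"] = min(elite), max(elite)

    # Influence function: the codified knowledge guides new variation.
    population = [
        random.uniform(belief["lo"], belief["hi"]) + random.gauss(0, 0.1)
        for _ in range(POP_SIZE)
    ]

best = max(population, key=fitness)
print(f"after {GENERATIONS} generations: best x = {best:.3f}")
```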

    Front Matter - Soft Computing for Data Mining Applications

    Efficient tools and algorithms for knowledge discovery in large data sets have been devised in recent years. These methods exploit the ability of computers to search huge amounts of data quickly and effectively. However, the data to be analyzed are imprecise and afflicted with uncertainty. In the case of heterogeneous data sources such as text, audio and video, the data may moreover be ambiguous and partly conflicting. Patterns and relationships of interest are also usually vague and approximate. Thus, to make the information mining process more robust, or human-like, methods for searching and learning require tolerance towards imprecision, uncertainty and exceptions; they must have approximate reasoning capabilities and be able to handle partial truth. Properties of this kind are typical of soft computing. Soft computing techniques like Genetic
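
    A minimal illustration of the "partial truth" mentioned above: instead of a crisp predicate that is either true or false, a fuzzy membership function returns a degree between 0 and 1, so boundary cases are handled gracefully. The trapezoidal shape and its breakpoints below are arbitrary assumptions chosen for the example.

```python
def fuzzy_warm(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm'.
    The breakpoints 15, 20, 26, 32 are illustrative, not canonical."""
    if temp_c <= 15 or temp_c >= 32:
        return 0.0
    if 20 <= temp_c <= 26:
        return 1.0
    if temp_c < 20:
        return (temp_c - 15) / 5   # rising edge
    return (32 - temp_c) / 6       # falling edge

# A crisp rule ("temp > 20 means warm") is either true or false;
# the fuzzy version tolerates imprecision at the boundary.
for t in (14, 18, 23, 30):
    print(t, round(fuzzy_warm(t), 2))
```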

    Advances in Evolutionary Algorithms

    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas and concepts from one part of this huge field.

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could produce different recovery rates for patients. This study therefore combines a statistical method with decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of the data as training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree. Then, using the t-test, we verify these rules to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
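
    A sketch of the generate-then-verify idea described above, under stated assumptions: fit a decision tree, read the threshold of each internal node as a candidate cut point, split the outcome variable on that cut, and keep only the cuts that a two-sample t-test finds significant. The synthetic data, the column roles and the 0.05 significance level are assumptions for illustration; the paper's exact procedure may differ.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier

# Illustrative synthetic data standing in for the 212 patient records.
rng = np.random.default_rng(0)
X = rng.normal(size=(212, 3))        # hypothetical clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=212) > 0).astype(int)

# All data are used for training, as in the study.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every internal node contributes one candidate rule: feature <= threshold.
t = tree.tree_
for node in range(t.node_count):
    if t.children_left[node] == -1:  # leaf: no cut point
        continue
    feat, thr = t.feature[node], t.threshold[node]
    left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
    stat, p = stats.ttest_ind(left, right, equal_var=False)
    if p < 0.05:                     # keep only t-test-verified rules
        print(f"feature {feat} <= {thr:.2f}: p = {p:.4f}")
```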

    Genetic Algorithms in Software Architecture Synthesis

    Software architecture design is a critical phase of software development, since the architecture defines the skeleton of the software: how the program is divided into components, and how the components communicate with each other. A piece of software can usually be implemented in many working ways, but a working implementation does not guarantee that the software is also well built; quality is guaranteed by a carefully and skillfully designed architecture. Designing a software architecture is challenging. The design must take into account the requirements of many different stakeholders (e.g., users, implementers, marketers) and consider how as many of these requirements as possible can be realized in the architecture. Architecture design therefore calls for an experienced software architect who has accumulated know-how over years of software projects. In addition to such experience-based knowledge, software architecture design practices have been collected into catalogs that present proven solutions, so-called design styles and patterns, to common architectural design problems. One can thus think of an architecture as being produced by searching, guided by experience, for the best possible combination of design patterns and styles; architecture design is, in effect, an optimization problem. Software keeps becoming more complex. As applications grow more complex, architecture design also becomes harder and more time-consuming. Basing the design on tacit knowledge and the architects' experience makes the process ever slower and less transparent. Automating architecture design would therefore bring large savings; nor would staff turnover threaten the loss of know-how, since the design could easily be repeated from scratch. This dissertation investigates how this search for the best possible solution (i.e., the application of design patterns and styles) could be automated. Complex optimization problems are tackled with search algorithms that comb the search space using some randomized method. One of the most popular search algorithms is the genetic algorithm. Genetic algorithms examine a small set of solutions at a time and search for the best solution by combining parts of the solutions found and by mutating them. Each solution is assigned a quality value and, imitating natural selection, the best candidates are kept and developed further while the worst are discarded. Applying search algorithms to software engineering problems, e.g., software design, testing and project management, is called search-based software engineering. This dissertation belongs to the field of search-based software design and studies so-called software architecture synthesis with genetic algorithms. Architecture synthesis starts from a "null architecture" that satisfies the system's functional requirements but takes no stand on quality requirements. Quality is then improved by adding design styles and patterns to the initial architecture. In this dissertation, quality is evaluated in terms of modifiability, efficiency and understandability. The end result is a proposed architecture that satisfies the functional requirements and is also of high quality. Genetic algorithms have not previously been applied to design problems of this level, so the work develops a new way of modeling an architecture for a genetic algorithm, as well as a formula for computing architecture quality. Beyond the basic implementation, particular features of the genetic algorithm, namely the crossover operation and the fitness function, are studied in more detail, and alternative implementations are developed for them. Results from the case studies show that, at present, genetic-algorithm-based architecture synthesis produces solutions of roughly the same level as those of a third-year software engineering student.

    This thesis presents an approach for synthesizing software architectures with genetic algorithms. Previously in the literature, genetic algorithms have mostly been used to improve existing architectures; the method presented here, however, focuses on upstream design. The chosen genetic construction of software architectures is based on a model which contains information on functional requirements only. Architecture styles and design patterns are used to transform the initial high-level model into a more detailed design. Quality attributes, here modifiability, efficiency and complexity, are encoded in the algorithm's fitness function for evaluating the produced solutions. The final solution is given as a UML class diagram. While the main contribution is introducing the method for architecture synthesis, basic tool support for the implementation is also presented. Two case studies are used for evaluation. One uses the sketch for an electronic home control system, a typical embedded system; the other is based on a robot war game simulator, a typical framework system. Evaluation is mostly based on fitness graphs and (subjective) evaluation of the produced class diagrams. In addition to the basic approach, variations and extensions regarding crossover and the fitness function have been made. While the standard algorithm uses a random crossover, asexual reproduction and complementary crossover are also studied. Asexual crossover corresponds to real-life design situations, where two architectures are rarely combined. Complementary crossover, in turn, attempts to purposefully combine good parts of two architectures. The fitness function is extended with the option to include modifiability scenarios, which enables more targeted design decisions, as critical parts of the architecture can be evaluated individually. In order to achieve a wider range of solutions that answer to competing quality demands, a multi-objective approach using Pareto optimality is given as an alternative to the single weighted fitness function. The multi-objective approach evaluates modifiability and efficiency, and outputs the class diagrams of the whole Pareto front of the last generation; thus, extremes for both quality attributes as well as middle-ground solutions can be compared. An experimental study is also conducted in which independent experts evaluate the produced solutions for the electronic home control system. Results show that genetic software architecture synthesis is indeed feasible, and that the quality of solutions at this stage is roughly at the level of third-year software engineering students.
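
    The single weighted fitness function described above can be sketched as follows: each candidate architecture (here simply a list of applied pattern names) receives sub-scores for modifiability, efficiency and complexity, which are combined with fixed weights; crossover optionally degenerates to asexual reproduction, one of the variants studied. All metrics, weights and pattern names are illustrative assumptions, not the thesis's actual formulas.

```python
import random

WEIGHTS = {"modifiability": 0.4, "efficiency": 0.4, "complexity": -0.2}
PATTERNS = ["adapter", "facade", "strategy", "dispatcher", "mediator"]

def score(arch):
    """Weighted single-objective fitness; the sub-metrics are placeholders
    standing in for the thesis's real quality measures."""
    subs = {
        "modifiability": arch.count("adapter") + arch.count("strategy"),
        "efficiency": -arch.count("dispatcher"),
        "complexity": len(set(arch)),
    }
    return sum(WEIGHTS[k] * subs[k] for k in WEIGHTS)

def crossover(a, b, asexual=False):
    # Asexual reproduction (one of the studied variants) copies one parent;
    # the standard operator splices two architectures at a random cut point.
    if asexual:
        return list(a)
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(arch):
    # Mutation replaces one slot, mimicking applying a different pattern.
    arch = list(arch)
    arch[random.randrange(len(arch))] = random.choice(PATTERNS)
    return arch

population = [[random.choice(PATTERNS) for _ in range(6)] for _ in range(20)]
for _ in range(40):
    population.sort(key=score, reverse=True)
    parents = population[:10]          # elitist selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=score)
print("best architecture:", best, "fitness:", round(score(best), 2))
```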

    Gaining Insight into Determinants of Physical Activity using Bayesian Network Learning

    Contains full text: 228326pre.pdf (preprint version, Open Access) and 228326pub.pdf (publisher's version, Open Access). BNAIC/BeneLearn 202

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway's Life according to a general MR streaming pattern. We chose Life because it is simple enough as a testbed for MR's applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms' performance on Amazon's Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous Life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
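
    A minimal sketch of the strip-partitioning idea in a Hadoop-streaming mapper: each horizontal strip of the grid is owned by one reducer key, and only the strip's boundary rows are re-emitted to the neighbouring strips, so per-iteration communication is proportional to the grid width rather than the strip area. The key format and strip height below are assumptions for the sketch, not the paper's exact protocol; the matching reducer (not shown) would apply the Life update rule to each strip plus its received halo rows.

```python
#!/usr/bin/env python3
# Illustrative Hadoop-streaming mapper for one Life iteration under strip
# partitioning. Assumed input line format: "strip_id<TAB>row_index<TAB>cells"
# where cells is a string of 0/1 characters and row_index is global.
import sys

ROWS_PER_STRIP = 100   # assumed strip height

for line in sys.stdin:
    strip_id, row_idx, cells = line.rstrip("\n").split("\t")
    strip_id, row_idx = int(strip_id), int(row_idx)

    # Every row is needed by its own strip's reducer.
    print(f"{strip_id}\t{row_idx}\t{cells}")

    # Only boundary rows cross strip borders: the top row also goes to the
    # strip above, the bottom row to the strip below. This is the strip-
    # partitioning optimization: O(width) traffic per strip per iteration
    # instead of shipping whole sub-grids. (A real job would also guard the
    # last strip or wrap around for a toroidal grid.)
    if row_idx % ROWS_PER_STRIP == 0 and strip_id > 0:
        print(f"{strip_id - 1}\t{row_idx}\t{cells}")
    if row_idx % ROWS_PER_STRIP == ROWS_PER_STRIP - 1:
        print(f"{strip_id + 1}\t{row_idx}\t{cells}")
```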

    Digital control networks for virtual creatures

    Robot control systems evolved with genetic algorithms traditionally take the form of floating-point neural network models. This thesis proposes that digital control systems, such as quantised neural networks and logical networks, may also be used for robot control. The inspiration is the observation that the dynamics of discrete networks may contain cyclic attractors which generate rhythmic behaviour, and that rhythmic behaviour underlies the central pattern generators which drive low-level motor activity in the biological world. To investigate this, a series of experiments was carried out in a simulated, physically realistic 3D world. The performance of evolved controllers was evaluated on two well-known control tasks: pole balancing, and locomotion of evolved morphologies. The performance of evolved digital controllers was compared to that of evolved floating-point neural networks. The results show that the digital implementations are competitive with floating-point designs on both benchmark problems. In addition, the first reported evolution of a biped walker from scratch is presented, demonstrating that complex behaviour can result from simple components when all parameters are left open to evolutionary optimisation.
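
    The cyclic-attractor idea is easy to demonstrate: iterating a small synchronous Boolean network eventually revisits a previous state, and the repeating state sequence is exactly the kind of rhythmic signal a central pattern generator needs. The particular three-node update rule below is an arbitrary example, not a controller from the thesis.

```python
# A tiny synchronous Boolean network whose trajectory falls into a cyclic
# attractor. The update rule is a 3-node ring with one inversion.
def step(state):
    a, b, c = state
    return (not c, a, b)

state, seen, t = (True, False, False), {}, 0
while state not in seen:
    seen[state] = t        # record when each state was first visited
    state = step(state)
    t += 1

# The gap between the revisit time and the first visit is the cycle length,
# i.e. the period of the rhythmic output.
period = t - seen[state]
print(f"entered a cycle of period {period}")   # prints: period 6
```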