769 research outputs found

    A comprehensive survey on cultural algorithms


    Catgame: A Tool For Problem Solving In Complex Dynamic Systems Using Game Theoretic Knowledge Distribution In Cultural Algorithms, And Its Application (catneuro) To The Deep Learning Of Game Controller

    Cultural Algorithms (CA) are knowledge-intensive, population-based stochastic optimization methods that are modeled after human cultures and are suited to solving problems in complex environments. The CA Belief Space stores knowledge harvested from prior generations and re-distributes it to future generations via a knowledge distribution (KD) mechanism. Each individual in the population is then guided through the search space by the associated knowledge. Previously, CA implementations have used only competitive KD mechanisms, which have performed well for problems embedded in static environments. Relatively recently, CA research has evolved to encompass dynamic problem environments. Given increasing environmental complexity, a natural question arises: can KD mechanisms that also incorporate cooperation perform better in such environments than purely competitive ones? Borrowing from game theory, game-based KD mechanisms are implemented and tested against the default competitive mechanism, Weighted Majority (WTD). Two different concepts of complexity are addressed: numerical optimization under dynamic environments and hierarchical, multi-objective optimization for evolving deep learning models. The former is addressed with the CATGame software system and the latter with CATNeuro. CATGame implements three types of games that span both cooperation and competition for knowledge distribution, namely: Iterated Prisoner's Dilemma (IPD), Stag-Hunt, and Stackelberg. The performance of the three game mechanisms is compared with the aid of a dynamic problem generator called Cones World. Weighted Majority, aka “wisdom of the crowd” and the default CA competitive KD mechanism, is used as the benchmark. It is shown that games that support both cooperation and competition do indeed perform better, though not in all cases. The results shed light on what kinds of games are suited to problem solving in complex, dynamic environments. Specifically, games that balance exploration and exploitation using the local signal of ‘social’ rank, Stag-Hunt and IPD, perform better. Stag-Hunt, which is also the most cooperative of the games tested, performed the best overall. Dynamic analysis of the ‘social’ aspects of the CA test runs shows that Stag-Hunt allocates compute resources more consistently than the others in response to environmental complexity changes. Stackelberg, where allocation decisions are centralized, as in a centrally planned economic system, is found to be the least adaptive.
    CATNeuro is for solving neural architecture search (NAS) problems. Contemporary ‘deep learning’ neural network models have proven effective. However, the network topologies may be complex and not immediately obvious for the problem at hand. This has given rise to the secondary field of neural architecture search, which is still nascent, with many frameworks and approaches now becoming available. This paper describes a NAS method based on graph evolution, pioneered by NEAT (Neuroevolution of Augmenting Topologies) but driven by the evolutionary mechanisms of Cultural Algorithms. Here CATNeuro is applied to find optimal network topologies to play a 2D fighting game called FightingICE (derived from “The Rumble Fish” video game). A policy-based, reinforcement learning method is used to create the training data for network optimization. CATNeuro is still evolving. To inform its development, in this primary foray into NAS we contrast the performance of CATNeuro with two different knowledge distribution mechanisms: the stalwart Weighted Majority and a new one based on the Stag-Hunt game from evolutionary game theory, which performed the best in CATGame. The research shows that Stag-Hunt has a distinct edge over WTD in terms of game performance, model accuracy, and model size. It is therefore deemed to be the preferred mechanism for complex, hierarchical optimization tasks such as NAS and is planned to be used as the default KD mechanism in CATNeuro going forward.
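
    The belief-space/KD idea can be made concrete with a minimal sketch. The code below is an illustrative toy, not the CATGame or CATNeuro implementation: it assumes a simple real-valued minimization problem, a belief space that stores only the best-known solution, and a Stag-Hunt-flavoured KD rule in which better-ranked individuals are more likely to "cooperate" by moving toward the shared belief while lower-ranked individuals "defect" and explore on their own. All names and parameters (sphere, coop_rate, etc.) are hypothetical.

```python
import random

# Hypothetical, simplified sketch of one Cultural Algorithm generation loop.
# The belief space keeps the best solution seen so far; a Stag-Hunt-style
# knowledge-distribution (KD) rule lets each individual either "cooperate"
# (move toward the shared belief) or "defect" (explore independently),
# conditioned on its local fitness rank. Payoffs and parameters are
# illustrative only.

def sphere(x):                       # toy objective: minimize sum of squares
    return sum(v * v for v in x)

def evolve(dim=5, pop_size=20, generations=100, coop_rate=0.7):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    belief = min(pop, key=sphere)    # belief space: best-known solution

    for _ in range(generations):
        ranked = sorted(pop, key=sphere)
        belief = min([belief, ranked[0]], key=sphere)
        new_pop = []
        for rank, ind in enumerate(ranked):
            # Better-ranked individuals are more likely to "hunt the stag",
            # i.e. follow the shared belief; others keep exploring.
            if random.random() < coop_rate * (1 - rank / pop_size):
                child = [b + random.gauss(0, 0.1) for b in belief]   # exploit
            else:
                child = [v + random.gauss(0, 1.0) for v in ind]      # explore
            new_pop.append(child)
        pop = new_pop
    return belief, sphere(belief)

best, value = evolve()
print(value)
```

    In full Cultural Algorithm implementations the Belief Space typically holds several knowledge sources and the KD game allocates individuals among them, rather than steering everyone toward a single stored solution as this toy does.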

    Market Engineering

    This open access book provides a broad range of insights on market engineering and information management. It covers topics like auctions, stock markets, electricity markets, the sharing economy, information and emotions in markets, smart decision-making in cities and other systems, and methodological approaches to conceptual modeling and taxonomy development. Overall, this book is a source of inspiration for everybody working on the vision of advancing the science of engineering markets and managing information for contributing to a bright, sustainable, digital world. Markets are powerful and extremely efficient mechanisms for coordinating individuals’ and organizations’ behavior in a complex, networked economy. Thus, designing, monitoring, and regulating markets is an essential task of today’s society. This task does not derive from a purely economic point of view alone. Leveraging market forces can also help to tackle pressing social and environmental challenges. Moreover, markets process, generate, and reveal information. This information is a production factor and a valuable economic asset. In an increasingly digital world, it is more essential than ever to understand the life cycle of information from its creation and distribution to its use. Neither markets nor the flow of information should emerge and develop arbitrarily from the actions of individual, profit-driven actors. Instead, they should be engineered to best serve the goals of society as a whole. This motivation drives the research fields of market engineering and information management. With this book, the editors and authors honor Professor Dr. Christof Weinhardt for his enormous and ongoing contribution to market engineering and information management research and practice. It was presented to him on the occasion of his sixtieth birthday in April 2021. Thank you very much, Christof, for so many years of cooperation, support, inspiration, and friendship.

    An agent approach to improving radio frequency identification enabled Returnable Transport Equipment

    Returnable transport equipment (RTE) such as pallets form an integral part of the supply chain, and poor management leads to costly losses. Companies often address this matter by outsourcing the management of RTE to logistics service providers (LSPs). LSPs are faced with the task of providing logistical expertise to reduce RTE-related waste whilst differentiating their own services to remain competitive. In the current challenging economic climate, the role of the LSP in delivering innovative ways to achieve competitive advantage has never been so important. It is reported that applying radio frequency identification (RFID) to RTE enables LSPs such as DHL to gain competitive advantage and offer clients improvements such as loss reduction, process efficiency improvement and effective security. However, the increased visibility and functionality of RFID-enabled RTE requires further investigation with regard to decision-making. The distributed nature of the RTE network favours a decentralised decision-making format. Agents are an effective way to represent objects from the bottom up, capturing behaviour and enabling localised decision-making. Therefore, an agent-based system is proposed to represent the RTE network and utilise the visibility and data gathered from RFID tags. Two types of agents are developed to represent the trucks and the RTE, each with bespoke rules and algorithms to facilitate negotiations. The aim is to create schedules that integrate RTE pick-ups as the trucks return to the depot. The findings assert that:
    - agent-based modelling provides an autonomous tool that is effective in modelling RFID-enabled RTE in a decentralised manner, utilising the real-time data facility;
    - the RFID-enabled RTE model developed enables autonomous agent interaction, which leads to a feasible schedule integrating both forward and reverse flows for each RTE batch;
    - the RTE agent scheduling algorithm developed promotes the utilisation of RTE by including an automatic return flow for each batch of RTE, whilst considering fleet costs and utilisation rates;
    - the research conducted contributes an agent-based platform that LSPs can use to assess the most appropriate strategies to implement for RTE network improvement for each of their clients.
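
    As an illustration of the kind of decentralised negotiation described above (and not the thesis's actual rules or algorithms), the following sketch assumes two hypothetical agent types: RTE batch agents that request a return trip to the depot, and truck agents that bid for pick-ups lying on their return leg and fitting within their spare capacity. The matching is a simple greedy auction; all class names, capacities, and costs are placeholder assumptions.

```python
from dataclasses import dataclass, field

# Illustrative, simplified sketch of agent negotiation for integrating RTE
# pick-ups into return trips. Rules, costs, and capacities are hypothetical.

@dataclass
class RTEBatch:
    batch_id: str
    location: float          # distance from depot (arbitrary units)
    size: int                # number of pallets in the batch

@dataclass
class Truck:
    truck_id: str
    route_end: float         # farthest point of the outbound route
    spare_capacity: int
    pickups: list = field(default_factory=list)

    def bid(self, batch):
        # A truck only bids if the batch lies on its return leg and fits.
        if batch.location <= self.route_end and batch.size <= self.spare_capacity:
            return abs(self.route_end - batch.location)  # smaller detour = better bid
        return None

def negotiate(trucks, batches):
    """Greedy matching: each batch goes to the truck with the cheapest bid."""
    for batch in batches:
        bids = [(t.bid(batch), t) for t in trucks]
        bids = [(cost, t) for cost, t in bids if cost is not None]
        if bids:
            cost, winner = min(bids, key=lambda b: b[0])
            winner.pickups.append(batch.batch_id)
            winner.spare_capacity -= batch.size
    return {t.truck_id: t.pickups for t in trucks}

trucks = [Truck("T1", route_end=50, spare_capacity=10),
          Truck("T2", route_end=80, spare_capacity=4)]
batches = [RTEBatch("B1", location=30, size=6), RTEBatch("B2", location=70, size=4)]
print(negotiate(trucks, batches))   # {'T1': ['B1'], 'T2': ['B2']}
```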

    On The Application Of Computational Modeling To Complex Food Systems Issues

    Transdisciplinary food systems research aims to merge insights from multiple fields, often revealing confounding, complex interactions. Computational modeling offers a means to discover patterns and formulate novel solutions to such systems-level problems. The best models serve as hubs, or boundary objects, which ground and unify a collaborative, iterative, and transdisciplinary process of stakeholder engagement. This dissertation demonstrates the application of agent-based modeling, network analytics, and evolutionary computational optimization to the pressing food systems problem areas of livestock epidemiology and global food security. It comprises a methodological introduction, an executive summary, three journal-article formatted chapters, and an overarching discussion section. Chapter One employs an agent-based computer model (RUSH-PNBM v.1.1) developed to study the potential impact of the trend toward increased producer specialization on resilience to catastrophic epidemics within livestock production chains. In each run, an infection is introduced and may spread according to probabilities associated with the various modes of contact between hog producer, feed mill, and slaughter plant agents. Experimental data reveal that more-specialized systems are vulnerable to outbreaks at lower spatial densities, have more abrupt percolation transitions, and are characterized by less-predictable outcomes, suggesting that reworking network structures may represent a viable means to increase biosecurity. Chapter Two uses a calibrated, spatially explicit version of RUSH-PNBM (v.1.2) to model the hog production chains within three U.S. states. Key metrics are calculated after each run, some of which pertain to overall network structures, while others describe each actor’s positionality within the network. A genetic programming algorithm is then employed to search for mathematical relationships between multiple individual indicators that effectively predict each node’s vulnerability. This “meta-metric” approach could be applied to aid livestock epidemiologists in the targeting of biosecurity interventions and may also be useful to study a wide range of complex network phenomena. Chapter Three focuses on food insecurity resulting from the projected gap between global food supply and demand over the coming decades. While no single solution has been identified, scholars suggest that investments into multiple interventions may stack together to solve the problem. However, formulating an effective plan of action requires knowledge about the level of change resulting from a given investment into each wedge, the time before that effect unfolds, the expected baseline change, and the maximum possible level of change. This chapter details an evolutionary-computational algorithm to optimize investment schedules according to the twin goals of maximizing global food security and minimizing cost. Future work will involve parameterizing the model through an expert informant advisory process to develop the existing framework into a practicable food policy decision-support tool.
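
    As a rough illustration of the percolation behaviour described for Chapter One (not the calibrated RUSH-PNBM model), the sketch below wires hog producer agents to a small set of shared feed mills and lets an introduced infection spread along those contacts with a fixed transmission probability, so that outbreak size can be compared across contact densities. The network construction and all parameters are hypothetical placeholders.

```python
import random

# Minimal, hypothetical sketch of infection spread on a producer/feed-mill
# contact network. Not the dissertation's model; parameters are illustrative.

def simulate_outbreak(n_producers=200, n_mills=20, contacts_per_producer=2,
                      p_transmit=0.3, seed=None):
    rng = random.Random(seed)
    # Each producer is linked to a few feed mills; varying the number and
    # concentration of these links is one crude way to represent specialization.
    links = {p: rng.sample(range(n_mills), contacts_per_producer)
             for p in range(n_producers)}
    infected_producers = {rng.randrange(n_producers)}   # index case
    infected_mills = set()
    changed = True
    while changed:
        changed = False
        for p, mills in links.items():
            if p in infected_producers:
                for m in mills:                          # producer -> mill spread
                    if m not in infected_mills and rng.random() < p_transmit:
                        infected_mills.add(m)
                        changed = True
            elif any(m in infected_mills for m in mills) and rng.random() < p_transmit:
                infected_producers.add(p)                # mill -> producer spread
                changed = True
    return len(infected_producers) / n_producers         # final outbreak size

# Outbreak size rises sharply once contact density crosses a percolation-like threshold.
print(simulate_outbreak(contacts_per_producer=1), simulate_outbreak(contacts_per_producer=4))
```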

    Networks of change: extending Alaska-based communication networks to meet the challenges of the anthropocene

    Thesis (Ph.D.) University of Alaska Fairbanks, 2017. The Anthropocene is a contested term. As I conceptualize it throughout this dissertation, the Anthropocene is defined by an increased coupling of social and environmental systems at the global scale such that the by-products of human processes dominate the global stratigraphic record. Additionally, I connect the term to a worldview that sees this increased coupling as an existential threat to humanity's ability to sustain life on the planet. Awareness that the planet-wide scale of this coupling is fundamentally a new element in earth history is implicit in both understandings. How individuals and communities are impacted by this change varies greatly depending on a host of locally specific cross-scale factors. The range of scales (physical and social) that must be negotiated to manage these impacts places novel demands on the communication networks that shape human agency. Concern for how these demands are being met, and whose interests are being served in doing so, is the primary motivation for my research. My work is grounded in the communication-oriented theoretical traditions of media ecology and the more recent social-ecological system conceptualizations promoted in the study of resilience. I combine these ideas through a mixed methodology of digital ethnography and social network analysis to explore the communication dynamics of four Alaska-based social-ecological systems. The first two examples capture communication networks that formed in response to singular, rapid-change environmental events (a coastal storm and a river flood). The latter two map communication networks that have formed in response to more diffuse, slower-acting environmental changes (a regional webinar series and an international arctic change conference). In each example, individuals or organizations enter and exit the mapped network(s) as they engage in the issue and specific communication channel being observed. Under these parameters a cyclic pattern of network expansion and contraction is identified. Expansion events are heavily influenced by established relationships retained during previous contraction periods. Many organizational outreach efforts are focused on triggering and participating in expansion events; however, my observations highlight the role of legacy networks in system change. I suggest that for organizations interested in fostering sustainable social-ecological relationships in the Anthropocene, strategic intervention may best be accomplished through careful consideration of how communicative relationships are maintained immediately following and in between expansion events. In the final sections of my dissertation I present a process template to support organizations interested in doing so. I include a complete set of learning activities to facilitate organizational use as well as examples of how the Alaska Native Knowledge Network is currently applying the process to meet their unique organizational needs.
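
    One simple way to operationalise the expansion/contraction pattern described above, offered here as an illustrative sketch rather than the dissertation's method, is to bin time-stamped communication records into periods, count active participants per period, and measure how much of each expansion period's ties were already present in earlier periods (the "legacy network"). The record format and all thresholds below are assumptions.

```python
from collections import defaultdict

# Illustrative sketch: detect expansion events in time-stamped communication
# records and report the share of "legacy" ties reused in each expansion.

def activity_by_period(records):
    """records: iterable of (period, sender, receiver) tuples."""
    nodes, edges = defaultdict(set), defaultdict(set)
    for period, sender, receiver in records:
        nodes[period].update([sender, receiver])
        edges[period].add(frozenset((sender, receiver)))
    return nodes, edges

def expansion_events(records):
    nodes, edges = activity_by_period(records)
    periods = sorted(nodes)
    seen_edges = set()
    events = []
    for prev, cur in zip(periods, periods[1:]):
        seen_edges |= edges[prev]                        # ties observed so far
        growth = len(nodes[cur]) - len(nodes[prev])
        if growth > 0:                                   # an expansion event
            legacy = len(edges[cur] & seen_edges)        # ties reused from the past
            events.append((cur, growth, legacy / max(len(edges[cur]), 1)))
    return events   # (period, net growth in participants, share of legacy ties)

records = [(1, "a", "b"), (2, "a", "b"), (2, "b", "c"), (3, "a", "b"),
           (3, "c", "d"), (3, "b", "c")]
print(expansion_events(records))   # [(2, 1, 0.5), (3, 1, 0.666...)]
```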

    Polycentric Information Commons: A Theory Development and Empirical Investigation

    Decentralized systems online—such as open source software (OSS) development, online communities, wikis, and social media—often experience a decline in participation that threatens their long-term sustainability. Building on a rich body of research on the sustainability of physical resource systems, this dissertation presents a novel theoretical framing that addresses the sustainability issues which arise in decentralized systems online and are amplified by their open nature. The first essay develops the theory of polycentric information commons (PIC), which conceptualizes decentralized systems online as “information commons”. The theory defines information commons, the stakeholders that participate in them, the sustainability indicators of information commons, and the collective-action threats putting pressure on their long-term sustainability. Drawing on Ostrom’s factors associated with stable common pool resource systems, PIC theory specifies four polycentric governance practices that can help information commons reduce the magnitude and impact of collective-action threats while improving the information commons’ sustainability. The second essay further develops PIC theory by applying it in an empirical context of “digital activism”. Specifically, it examines the role of polycentric governance in reducing the threats to the legitimacy of digital activism—a type of information commons with an overarching objective of instigating societal change. As such, it illustrates the applicability of PIC theory in the study of digital activism. The third essay focuses on the threat of “information pollution” and its impact on open collaboration, a type of information commons dedicated to the creation of value through open participation online. It uncovers the way polycentric governance mechanisms help reduce the duration of pollution events. This essay contributes to PIC theory by expanding it to the realm of operational governance in open collaboration.
