
    A comparative study of evolutionary approaches to the bi-objective dynamic Travelling Thief Problem

    Dynamic evolutionary multi-objective optimization is a thriving research area. Recent contributions span the development of specialized algorithms and the construction of challenging benchmark problems. Here, we continue these research directions through the development and analysis of a new bi-objective problem, the dynamic Travelling Thief Problem (TTP), including three modes of dynamic change: city locations, item profit values, and item availability. Because the problem components embedded in the dynamic problem are interconnected, effectively tracking good trade-off solutions that satisfy both objectives throughout dynamic events is non-trivial. Consequently, we examine the relative contribution to the non-dominated set from a variety of population seeding strategies, including exact solvers and greedy algorithms for the knapsack and tour components, as well as random techniques. We introduce this responsive seeding extension within an evolutionary algorithm framework. The efficacy of alternative seeding mechanisms is evaluated across a range of exemplary problem instances using ranking-based and quantitative statistical comparisons, which combine performance measurements taken throughout the optimization. Our detailed experiments show that the different dynamic TTP instances present varying difficulty to the seeding methods tested. We posit the dynamic TTP as a suitable benchmark capable of generating problem instances with different controllable characteristics aligned with many real-world problems.
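
    For concreteness, the sketch below shows one way such population seeding might look for a TTP-like instance: a nearest-neighbour tour for the routing component, a profit/weight-greedy packing for the knapsack component, and random solutions for the rest of the population. The data representation (cities as coordinate pairs, items as (profit, weight, city) tuples) and the seeding mix are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical seeding sketch for a TTP-like instance (not the paper's code).
import math
import random

def nearest_neighbour_tour(cities, start=0):
    """Greedy tour seed: always visit the closest unvisited city next."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def greedy_packing(items, capacity):
    """Knapsack seed: pack items by descending profit/weight ratio."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1], reverse=True)
    packed, load = [False] * len(items), 0.0
    for i in order:
        if load + items[i][1] <= capacity:
            packed[i] = True
            load += items[i][1]
    return packed

def seed_population(cities, items, capacity, size=20, seeded_fraction=0.5):
    """Mix greedy seeds with purely random solutions after a dynamic change."""
    population = []
    for k in range(size):
        if k < size * seeded_fraction:
            tour = nearest_neighbour_tour(cities,
                                          start=random.randrange(len(cities)))
            packing = greedy_packing(items, capacity)
        else:
            tour = random.sample(range(len(cities)), len(cities))
            packing = [random.random() < 0.5 for _ in items]
        population.append((tour, packing))
    return population
```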

    Software Test Case Generation Tools and Techniques: A Review

    The software industry has been evolving at a very fast pace over the last two decades, and many software development, testing, and test case generation approaches have emerged in that time to deliver quality products and services. Testing plays a vital role in ensuring the quality and reliability of software products. In this paper, the authors conduct a systematic study of testing tools and techniques. Six of the most popular e-resources, namely IEEE, Springer, Association for Computing Machinery (ACM), Elsevier, Wiley, and Google Scholar, were used to download 738 manuscripts, of which 125 were selected for the study. Of the 125 selected manuscripts, approximately 79% are from reputed journals and around 21% are from reputable conferences. The testing tools discussed in this paper are broadly divided into five categories: open source; academic and research; commercial; academic and open source; and commercial and open source. The paper also discusses several benchmark datasets, viz. Evosuite 10, SF100 Corpus, the Defects4J repository, Neo4j, JSON, Mocha JS, and Node JS, to name a few. The aim of this paper is to make researchers aware of the various test case generation tools and techniques introduced in the last 11 years, along with their salient features.

    Computational creativity: an interdisciplinary approach to sequential learning and creative generations

    Creativity seems mysterious; when we experience a creative spark, it is difficult to explain how we got that idea, and we often invoke notions like "inspiration" and "intuition" when we try to explain the phenomenon. The fact that we are clueless about how a creative idea manifests itself does not necessarily imply that a scientific explanation cannot exist. We are unaware of how we perform certain tasks, such as biking or language understanding, yet we have more and more computational techniques that can replicate and hopefully explain such activities. We should understand that every creative act is the fruit of experience, society, and culture. Nothing comes from nothing. Novel ideas are never utterly new; they stem from representations that are already in the mind. Creativity involves establishing new relations between pieces of information we already had: the greater the knowledge, the greater the possibility of finding uncommon connections, and the greater the potential to be creative. In this vein, a beneficial approach to a better understanding of creativity must include computational or mechanistic accounts of such inner procedures and of the formation of the knowledge that enables such connections. That is the aim of Computational Creativity: to develop computational systems for emulating and studying creativity. Hence, this dissertation focuses on these two related research areas: discussing computational mechanisms to generate creative artifacts and describing some implicit cognitive processes that can form the basis for creative thoughts.

    A survey of Bayesian Network structure learning


    An integrated computational and collaborative approach for city resilience planning

    Given the rise in climate change-related extreme events, there is an urgent need for cities and regions to implement resilience plans based on data and evidence and developed in collaboration with key stakeholders. However, current planning and decision-making processes rely on limited data and modelling. Moreover, stakeholder engagement is significantly inhibited by social, political, and technological barriers. The research presented in this thesis aims to enhance resilience planning practice through the development and evaluation of an integrated computational and collaborative scenario planning approach. The scenario planning approach is tested within a geodesign framework and supported by several planning support systems (PSS), including urban growth models. These PSS tools are made accessible to key stakeholders through dedicated planning support theatres, enabling participants to collaborate both in person and online. Through two empirical case studies conducted in Australian regions, this research integrates data-driven modelling (computational) with people-led geodesign (collaborative) approaches for scenario forecasting and planning. The first case study explores anticipatory/normative scenarios, while the second focuses on exploratory scenario planning, with both aiming to enhance city and regional resilience. This thesis examines the roles played by both simple digital tools and purpose-built planning support theatres in scenario planning processes with key stakeholders, and investigates the utility of data-driven models in supporting collaborative scenario planning. Both integration experiments received positive feedback from most participants. However, to truly improve the process, there is a need for widely available, high-quality spatial and temporal datasets, including localised climate change impact data. In summary, an integrated computational and collaborative approach, augmented by data and technology, can provide an evidence base for decision-making towards a resilient future, fostering deeper engagement of the local community and cross-government collaboration in scenario planning.

    Parallel and Flow-Based High Quality Hypergraph Partitioning

    Balanced hypergraph partitioning is a classic NP-hard optimization problem that is a fundamental tool in such diverse disciplines as VLSI circuit design, route planning, sharding distributed databases, optimizing communication volume in parallel computing, and accelerating the simulation of quantum circuits. Given a hypergraph and an integer k, the task is to divide the vertices into k disjoint blocks of bounded size while minimizing an objective function on the hyperedges that span multiple blocks. In this dissertation we consider the most commonly used objective, the connectivity metric, where we aim to minimize the number of different blocks connected by each hyperedge. The most successful heuristic for balanced partitioning is the multilevel approach, which consists of three phases. In the coarsening phase, vertex clusters are contracted to obtain a sequence of structurally similar but successively smaller hypergraphs. Once sufficiently small, an initial partition is computed. Lastly, the contractions are successively undone in reverse order, and an iterative improvement algorithm is employed to refine the projected partition on each level.

    An important aspect in designing practical heuristics for optimization problems is the trade-off between solution quality and running time. The appropriate trade-off depends on the specific application, the size of the data sets, and the computational resources available to solve the problem. Existing algorithms are either slow, sequential, and of high solution quality, or simple, fast, easy to parallelize, and of low quality. While this trade-off cannot be avoided entirely, our goal is to close the gaps as much as possible. We achieve this by improving the state of the art in all non-trivial areas of the trade-off landscape with only a few techniques, employed in two different ways. Furthermore, most research on parallelization has focused on distributed memory, which neglects the greater flexibility of shared-memory algorithms and the wide availability of commodity multi-core machines. In this thesis, we therefore design and revisit fundamental techniques for each phase of the multilevel approach, and develop highly efficient shared-memory parallel implementations thereof.

    We consider two iterative improvement algorithms, one based on the Fiduccia-Mattheyses (FM) heuristic and one based on label propagation. For these, we propose a variety of techniques to improve the accuracy of gains when moving vertices in parallel, as well as low-level algorithmic improvements. For coarsening, we present a parallel variant of greedy agglomerative clustering with a novel method to resolve cluster join conflicts on the fly. Combined with a preprocessing phase for coarsening based on community detection, a portfolio of from-scratch partitioning algorithms, and recursive partitioning with work stealing, we obtain our first parallel multilevel framework. It is the fastest partitioner known and achieves medium-high quality, beating all parallel partitioners and coming close to the highest-quality sequential partitioner. Our second contribution is a parallelization of an n-level approach, where only one vertex is contracted and uncontracted on each level. This extreme approach aims at high solution quality via very fine-grained, localized refinement, but seems inherently sequential. We devise an asynchronous n-level coarsening scheme based on a hierarchical decomposition of the contractions, as well as a batch-synchronous uncoarsening and, later, a fully asynchronous uncoarsening. In addition, we adapt our refinement algorithms and also use the preprocessing and the portfolio. This scheme is highly scalable and achieves the same quality as the highest-quality sequential partitioner (which is based on the same components), but is of course slower than our first framework due to the fine-grained uncoarsening.

    The last ingredient for high quality is an iterative improvement algorithm based on maximum flows. In the sequential setting, we first improve an existing idea by solving incremental maximum flow problems, which leads to smaller cuts and is faster due to engineering efforts. Subsequently, we parallelize the maximum flow algorithm and schedule refinements in parallel. Beyond striving for the highest quality, we present a deterministically parallel partitioning framework. We develop deterministic versions of the preprocessing, coarsening, and label propagation refinement. Experimentally, we demonstrate that the penalties for determinism in terms of partition quality and running time are very small. All of our claims are validated through extensive experiments, comparing our algorithms with state-of-the-art solvers on large and diverse benchmark sets. To foster further research, we make our contributions available in our open-source framework Mt-KaHyPar. While it seems inevitable that, with ever-increasing problem sizes, we must transition to distributed-memory algorithms, the study of shared-memory techniques is not in vain. With the multilevel approach, even the inherently slow techniques have a role to play in fast systems, as they can be employed to boost quality on coarse levels at little expense. Similarly, techniques for shared-memory parallelism remain important, both as soon as a coarse graph fits into memory and as local building blocks in a distributed algorithm.
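
    As a concrete illustration of the kind of move-based refinement described above, the following is a minimal single-threaded sketch of label-propagation refinement under the connectivity objective. The flat data layout (hyperedges as vertex lists, a per-vertex block array) and the naive per-move gain recomputation are assumptions chosen for readability; they do not reflect Mt-KaHyPar's actual data structures or its parallel gain computation.

```python
# Simplified label-propagation refinement sketch (connectivity objective).
from collections import Counter

def lp_refinement(hyperedges, part, k, max_block_size, rounds=3):
    n = len(part)
    incident = [[] for _ in range(n)]          # vertex -> incident hyperedges
    for e_id, e in enumerate(hyperedges):
        for v in e:
            incident[v].append(e_id)
    block_size = Counter(part)

    for _ in range(rounds):
        moved = False
        for v in range(n):
            src = part[v]
            best_gain, best_block = 0, src
            for b in range(k):
                if b == src or block_size[b] + 1 > max_block_size:
                    continue
                gain = 0
                for e_id in incident[v]:
                    pins = Counter(part[u] for u in hyperedges[e_id])
                    if pins[src] == 1:   # v is the sole pin of e in src:
                        gain += 1        #   src leaves e's connectivity set
                    if pins[b] == 0:     # e would gain a new block b
                        gain -= 1
                if gain > best_gain:
                    best_gain, best_block = gain, b
            if best_block != src:        # apply only strictly improving moves
                part[v] = best_block
                block_size[src] -= 1
                block_size[best_block] += 1
                moved = True
        if not moved:
            break
    return part
```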

    2023-2024 Lindenwood University Undergraduate Course Catalog

    Lindenwood University Undergraduate Course Catalog.

    Fictional Practices of Spirituality I: Interactive Media

    "Fictional Practices of Spirituality" provides critical insight into the implementation of belief, mysticism, religion, and spirituality into worlds of fiction, be it interactive or non-interactive. This first volume focuses on interactive, virtual worlds - may that be the digital realms of video games and VR applications or the imaginary spaces of life action role-playing and soul-searching practices. It features analyses of spirituality as gameplay facilitator, sacred spaces and architecture in video game geography, religion in video games and spiritual acts and their dramaturgic function in video games, tabletop, or LARP, among other topics. The contributors offer a first-time ever comprehensive overview of play-rites as spiritual incentives and playful spirituality in various medial incarnations

    Learning by Viewing: Generating Test Inputs for Games by Integrating Human Gameplay Traces in Neuroevolution

    Although automated test generation is common in many programming domains, games still challenge test generators due to their heavy randomisation and hard-to-reach program states. Neuroevolution combined with search-based software testing principles has been shown to be a promising approach for testing games, but the co-evolutionary search for optimal network topologies and weights involves unreasonably long search durations. In this paper, we aim to improve the evolutionary search for game input generators by integrating knowledge about human gameplay behaviour. To this end, we propose a novel way of systematically recording human gameplay traces and integrating these traces into the evolutionary search for networks using traditional gradient descent as a mutation operator. Experiments conducted on eight diverse Scratch games demonstrate that the proposed approach reduces the required search time from five hours down to only 52 minutes.
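
    The sketch below illustrates the core idea of using a supervised gradient step on recorded human gameplay traces as a mutation operator inside an evolutionary loop. The tiny linear policy, the squared-error loss, and the trace format (state vectors paired with action vectors) are simplifying assumptions for illustration; the paper itself evolves full network topologies and weights.

```python
# Hypothetical "gradient descent as mutation" sketch (not the paper's code).
import numpy as np

def gradient_mutation(weights, trace_states, trace_actions, lr=0.01):
    """Mutate a linear policy (n_inputs x n_actions weight matrix) by one
    gradient-descent step toward the human actions recorded in the trace."""
    preds = trace_states @ weights                    # policy outputs per state
    grad = trace_states.T @ (preds - trace_actions)   # d(MSE)/d(weights)
    grad /= len(trace_states)
    return weights - lr * grad

def evolve(population, trace_states, trace_actions, fitness, generations=100):
    """Plain (mu+lambda)-style loop where mutation combines Gaussian noise
    with the trace-guided gradient step above; `fitness` maps weights -> score."""
    for _ in range(generations):
        offspring = [gradient_mutation(w + np.random.normal(0, 0.05, w.shape),
                                       trace_states, trace_actions)
                     for w in population]
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[: len(population)]
    return population
```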

    Incorporating Surprisingly Popular Algorithm and Euclidean Distance-based Adaptive Topology into PSO

    While many Particle Swarm Optimization (PSO) algorithms use only fitness to assess the performance of particles, in this work we adopt the Surprisingly Popular Algorithm (SPA) as a complementary metric in addition to fitness. Consequently, particles that are not widely known also have the opportunity to be selected as learning exemplars. In addition, we propose a Euclidean distance-based adaptive topology to cooperate with SPA, where each particle only connects to the k particles with the shortest Euclidean distance during each iteration. We also introduce the adaptive topology into heterogeneous populations to better solve large-scale problems. Specifically, the exploration sub-population better preserves the diversity of the population, while the exploitation sub-population achieves fast convergence. Therefore, large-scale problems can be solved in a collaborative manner to elevate the overall performance. To evaluate the performance of our method, we conduct extensive experiments on various optimization problems, including three benchmark suites and two real-world optimization problems. The results demonstrate that our Euclidean distance-based adaptive topology outperforms other widely adopted topologies and further suggest that our method performs significantly better than state-of-the-art PSO variants on small-, medium-, and large-scale problems.
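
    A minimal sketch of the distance-based topology is given below: each particle recomputes its k nearest neighbours by Euclidean distance every iteration and learns from the fittest among them (assuming minimization). The SPA-based exemplar selection and the heterogeneous sub-populations are omitted, and the update constants are conventional placeholder values rather than the paper's settings.

```python
# Sketch of a Euclidean distance-based k-nearest-neighbour topology for PSO.
import numpy as np

def knn_best_exemplars(positions, fitness_values, k=5):
    """For each particle, return the position of the fittest particle among
    its k nearest neighbours (Euclidean distance), itself excluded.
    `positions` is (N, D); `fitness_values` is a length-N numpy array."""
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)            # never pick yourself
    exemplars = np.empty_like(positions)
    for i in range(len(positions)):
        neighbours = np.argsort(dists[i])[:k]  # k closest particles
        best = neighbours[np.argmin(fitness_values[neighbours])]
        exemplars[i] = positions[best]
    return exemplars

def pso_step(positions, velocities, pbest, exemplars, w=0.7, c1=1.5, c2=1.5):
    """Standard velocity/position update, with the neighbourhood exemplar
    taking the place of a fixed global best."""
    r1 = np.random.rand(*positions.shape)
    r2 = np.random.rand(*positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (pbest - positions)
                  + c2 * r2 * (exemplars - positions))
    return positions + velocities, velocities
```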