
    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Survey on Combinatorial Register Allocation and Instruction Scheduling

    Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the last three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible at the expense of increased compilation time. This paper provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques most commonly applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
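
    To give a concrete flavour of such combinatorial formulations, the sketch below (not taken from the survey) casts register allocation for a toy interference graph as a constraint problem and solves it by plain enumeration, one of the four techniques listed above; the program variables, the two-register machine, and the spill costs are illustrative assumptions.

```python
# Register allocation over a toy interference graph, solved by enumeration.
# All names, register counts, and spill costs are illustrative assumptions.
from itertools import product

VARIABLES = ["a", "b", "c", "d"]                 # hypothetical program variables
REGISTERS = ["r0", "r1"]                          # available registers
SPILL = "mem"                                     # placement meaning "spilled to memory"
SPILL_COST = {"a": 4, "b": 2, "c": 3, "d": 1}     # assumed cost of spilling each variable

# Pairs of variables whose live ranges overlap: they may not share a register.
INTERFERES = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}


def feasible(assignment):
    """No two interfering variables may share a physical register."""
    return all(assignment[x] == SPILL or assignment[x] != assignment[y]
               for x, y in INTERFERES)


def cost(assignment):
    """Total cost is the summed spill cost of variables kept in memory."""
    return sum(SPILL_COST[v] for v in VARIABLES if assignment[v] == SPILL)


best = None
for choice in product(REGISTERS + [SPILL], repeat=len(VARIABLES)):
    candidate = dict(zip(VARIABLES, choice))
    if feasible(candidate) and (best is None or cost(candidate) < cost(best)):
        best = candidate

print("optimal allocation:", best, "with spill cost", cost(best))
```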

    Energy Efficient Load Latency Tolerance: Single-Thread Performance for the Multi-Core Era

    Around 2003, newly activated power constraints caused single-thread performance growth to slow dramatically. The multi-core era was born with an emphasis on explicitly parallel software. Continuing to grow single-thread performance is still important in the multi-core context, but it must be done in an energy-efficient way. One significant impediment to performance growth in both out-of-order and in-order processors is the long latency of last-level cache misses. Prior work introduced the idea of load latency tolerance: the ability to dynamically remove miss-dependent instructions from critical execution structures, continue execution under the miss, and re-execute miss-dependent instructions after the miss returns. However, previously proposed designs were unable to improve performance in an energy-efficient way: they introduced too many new large, complex structures and re-executed too many instructions. This dissertation describes a new load latency tolerant design that is energy-efficient and applicable to both in-order and out-of-order cores. Key novel features include the formulation of slice re-execution as an alternative use of multi-threading support, efficient schemes for register and memory state management, and new pruning mechanisms that drastically reduce load latency tolerance's dynamic execution overheads. Area analysis shows that energy-efficient load latency tolerance increases the footprint of an out-of-order core by a few percent, while cycle-level simulation shows that it significantly improves the performance of memory-bound programs. Energy-efficient load latency tolerance is more energy-efficient than, and synergistic with, existing performance techniques such as dynamic voltage and frequency scaling (DVFS).
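
    The following is a minimal sketch of the load latency tolerance idea described above, under assumed simplifications: a straight-line instruction stream, a single long-latency load, and a software-simulated register file. It only illustrates deferring miss-dependent instructions into a slice and re-executing them once the miss returns; it is not the dissertation's microarchitecture.

```python
# Toy model of load latency tolerance: defer instructions that depend on a
# cache miss into a slice, keep executing independent work, then re-execute
# the slice once the miss returns. Instruction format, values, and latencies
# are illustrative assumptions.

# Each instruction: (destination register, operation, source registers).
PROGRAM = [
    ("r1", "load_miss", []),       # long-latency last-level cache miss
    ("r2", "add1", ["r1"]),        # depends on the missing value
    ("r3", "const5", []),          # independent of the miss
    ("r4", "add1", ["r3"]),        # independent of the miss
    ("r5", "add", ["r2", "r4"]),   # transitively depends on the miss
]

OPS = {
    "load_miss": lambda: 100,      # value that eventually returns from memory
    "const5": lambda: 5,
    "add1": lambda a: a + 1,
    "add": lambda a, b: a + b,
}

regs, poisoned, slice_buf = {}, set(), []

# Pass 1: execute under the miss, deferring anything that reads a poisoned register.
for dest, op, srcs in PROGRAM:
    if op == "load_miss" or any(s in poisoned for s in srcs):
        poisoned.add(dest)
        slice_buf.append((dest, op, srcs))
    else:
        regs[dest] = OPS[op](*(regs[s] for s in srcs))

print("executed under the miss:", regs)           # r3 and r4 completed

# Pass 2: the miss returns; re-execute the deferred slice in program order.
for dest, op, srcs in slice_buf:
    regs[dest] = OPS[op](*(regs[s] for s in srcs))

print("after slice re-execution:", regs)          # r1, r2, r5 now complete
```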

    Using slicing techniques to support scalable rigorous analysis of class models

    Slicing is a reduction technique that has been applied to class models to support model comprehension, analysis, and other modeling activities. In particular, slicing techniques can be used to produce class model fragments that include only those elements needed to analyze semantic properties of interest. However, many of the existing class model slicing techniques do not take constraints (invariants and operation contracts) expressed in auxiliary constraint languages into consideration when producing model slices. Their applicability is thus limited to situations in which the determination of slices does not require information found in constraints. In this dissertation we describe our work on class model slicing techniques that take into consideration constraints expressed in the Object Constraint Language (OCL). The slicing techniques described in the dissertation can be used to produce model fragments that each consist of only the model elements needed to analyze specified properties. The slicing techniques are intended to enhance the scalability of class model analysis that involves (1) checking conformance between an object configuration and a class model with specified invariants and (2) analyzing sequences of operation invocations to uncover invariant violations. The slicing techniques are used to produce model fragments that can be analyzed separately. An evaluation we performed provides evidence that the proposed slicing techniques can significantly reduce the time to perform the analysis.
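
    As a rough illustration of constraint-aware slicing, the sketch below keeps only the classes and associations that an OCL-style invariant actually navigates; the class model, the association names, and the invariant's recorded navigations are hypothetical, and the traversal is far simpler than the dissertation's techniques.

```python
# Toy constraint-aware slice of a class model: keep only the classes and
# associations navigated by one OCL-style invariant. The model and the
# invariant's recorded navigations are illustrative assumptions.

# Class model: class name -> {association name: target class}.
MODEL = {
    "Library":   {"books": "Book", "members": "Member"},
    "Book":      {"loans": "Loan"},
    "Member":    {"loans": "Loan"},
    "Loan":      {},
    "Publisher": {"catalog": "Book"},   # irrelevant to the invariant below
}

# Hypothetical invariant: context Member inv: self.loans->size() <= 3
INVARIANT = {"context": "Member", "navigations": {"loans"}}


def slice_model(model, invariant):
    """Collect the classes and associations reachable from the invariant's
    context class through the associations the invariant mentions."""
    needed = {invariant["context"]}
    kept_associations = []
    frontier = [invariant["context"]]
    while frontier:
        cls = frontier.pop()
        for assoc, target in model.get(cls, {}).items():
            if assoc in invariant["navigations"]:
                kept_associations.append((cls, assoc, target))
                if target not in needed:
                    needed.add(target)
                    frontier.append(target)
    return needed, kept_associations


classes, associations = slice_model(MODEL, INVARIANT)
print("classes in the slice:", classes)            # only Member and Loan
print("associations in the slice:", associations)  # only Member.loans -> Loan
```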

    A Localized Autonomous Control Algorithm For Robots With Heterogeneous Capabilities In A Multi-Tier Architecture

    This dissertation makes two contributions to the use of the Blackboard Architecture for command: the use of boundary nodes for data abstraction is introduced, and the use of a solver-based blackboard system with pruning is proposed. It also makes contributions advancing the engineering design process in the area of command system selection for heterogeneous robotic systems. It presents and analyzes data informing the choice between centralized and distributed command systems, and it characterizes the efficacy of pruning across different experimental scenarios, demonstrating when it is and is not effective. Finally, it demonstrates the operation of the system, raising the technology readiness level (TRL) of the technology towards a level suitable for actual mission use. The context for this work is a multi-tier mission architecture, based on prior work by Fink on a “tier scalable” architecture. That work took a top-down approach in which the superior tiers (in terms of scope of visibility) send specific commands to craft in lower tiers. While benefitting from the use of a large centralized processing center, this approach is limited in responding to failures and interference. The work presented herein has involved developing and comparatively characterizing centralized and decentralized (where superior nodes provide information and goals to the lower-level craft, but decisions are made locally) Blackboard Architecture based command systems. Blackboard Architecture advancements (a solver, pruning, boundary nodes) have been made and tested under multiple experimental conditions.
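
    The sketch below illustrates, under assumed simplifications, what a rule-based blackboard with pruning can look like: rules post conclusions to a shared set of facts, and rules whose preconditions can never become true are pruned before solving. The facts, rule names, and pruning criterion are illustrative assumptions, not the dissertation's system.

```python
# Toy rule-based blackboard with pruning: rules post conclusions to a shared
# fact set; rules whose preconditions can never hold are pruned before solving.
# Facts, rules, and the pruning criterion are illustrative assumptions.

# Rules: (name, preconditions, conclusion). A rule fires when every
# precondition is on the blackboard, posting its conclusion there.
RULES = [
    ("detect_obstacle", {"lidar_reading"}, "obstacle_ahead"),
    ("plan_detour",     {"obstacle_ahead", "map_loaded"}, "detour_planned"),
    ("dock_at_base",    {"battery_low", "base_visible"}, "docking"),
]

blackboard = {"lidar_reading", "map_loaded"}


def prune(rules, facts):
    """Drop rules with an unreachable precondition: a precondition is reachable
    only if it is already a fact or is the conclusion of some rule."""
    reachable = facts | {conclusion for _, _, conclusion in rules}
    return [(name, pre, concl) for name, pre, concl in rules if pre <= reachable]


def solve(rules, facts):
    """Forward-chain over the blackboard until no rule adds a new fact."""
    changed = True
    while changed:
        changed = False
        for _, pre, concl in rules:
            if pre <= facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts


active = prune(RULES, set(blackboard))
print("rules kept after pruning:", [name for name, _, _ in active])  # dock_at_base removed
print("facts after solving:", solve(active, set(blackboard)))
```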

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, on real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete event techniques.

    Conformance checking: A state-of-the-art literature review

    Conformance checking is a set of process mining functions that compare process instances with a given process model. It identifies deviations between a process instance's actual behaviour ("as-is") and its modelled behaviour ("to-be"). It is currently gaining momentum, especially in the context of analyzing compliance in organizations, e.g. for auditors. Researchers have proposed a variety of conformance checking techniques that are geared towards certain process model notations or specific applications such as process model evaluation. This article reviews a set of conformance checking techniques described in 37 scholarly publications. It classifies the techniques along the dimensions "modelling language", "algorithm type", "quality metric", and "perspective" using a concept matrix so that the techniques can be better accessed by practitioners and researchers. The matrix highlights the dimensions where extant research concentrates and where blind spots exist. For instance, process miners often use declarative process modelling languages, but applications in conformance checking are rare. Likewise, process mining can investigate process roles or process metrics such as duration, but conformance checking techniques focus narrowly on analyzing control-flow. Future research may construct techniques that support these neglected approaches to conformance checking.
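
    As a minimal illustration of control-flow conformance checking, the sketch below replays logged traces ("as-is") against a process model given as allowed directly-follows relations ("to-be") and reports deviating steps; the model, the traces, and the deviation rule are illustrative assumptions rather than any specific technique from the reviewed publications.

```python
# Toy control-flow conformance check: replay each trace against a process
# model given as allowed directly-follows relations and report deviations.
# The model, the log, and the deviation rule are illustrative assumptions.

# "To-be" model: activity -> set of activities allowed to follow it.
MODEL = {
    "register": {"check"},
    "check":    {"approve", "reject"},
    "approve":  {"archive"},
    "reject":   {"archive"},
    "archive":  set(),
}
START = "register"


def check_trace(trace):
    """Return the deviations between a logged trace ("as-is") and MODEL ("to-be")."""
    deviations = []
    if not trace or trace[0] != START:
        deviations.append(f"trace does not start with '{START}'")
    for current, following in zip(trace, trace[1:]):
        if following not in MODEL.get(current, set()):
            deviations.append(f"'{following}' is not allowed after '{current}'")
    return deviations


LOG = [
    ["register", "check", "approve", "archive"],   # conforming
    ["register", "approve", "archive"],            # skips the mandatory check
    ["check", "reject", "archive"],                # starts mid-process
]

for trace in LOG:
    print(trace, "->", check_trace(trace) or "conforms")
```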