
    Taming Numbers and Durations in the Model Checking Integrated Planning System

    The Model Checking Integrated Planning System (MIPS) is a temporal least-commitment heuristic search planner based on a flexible object-oriented workbench architecture. Its design clearly separates explicit and symbolic directed exploration algorithms from the set of on-line and off-line computed estimates and associated data structures. MIPS has shown distinguished performance in the last two international planning competitions. In the last event the description language was extended from pure propositional planning to include numerical state variables, action durations, and plan quality objective functions. Plans were no longer sequences of actions but time-stamped schedules. As a participant of the fully automated track of the competition, MIPS has proven to be a general system; in each track and every benchmark domain it efficiently computed plans of remarkable quality. This article introduces and analyzes the most important algorithmic novelties that were necessary to tackle the new layers of expressiveness in the benchmark problems and to achieve a high level of performance. The extensions include critical path analysis of sequentially generated plans to generate corresponding optimal parallel plans. The linear-time algorithm to compute the parallel plan bypasses known NP-hardness results for partial ordering by scheduling plans with respect to the set of actions and the imposed precedence relations. The efficiency of this algorithm also allows us to improve the exploration guidance: for each encountered planning state the corresponding approximate sequential plan is scheduled. One major strength of MIPS is its static analysis phase that grounds and simplifies parameterized predicates, functions and operators, that infers knowledge to minimize the state description length, and that detects domain object symmetries. The latter aspect is analyzed in detail. MIPS has been developed to serve as a complete and optimal state space planner, with admissible estimates, exploration engines and branching cuts. In the competition version, however, certain performance compromises had to be made, including floating-point arithmetic, weighted heuristic search exploration according to an inadmissible estimate, and parameterized optimization.
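
    The critical-path idea behind the parallel-plan step can be sketched as follows (an illustrative toy, not the MIPS code; the data layout is an assumption): because the sequential plan order is a topological order of the imposed precedence relation, a single left-to-right sweep computes each action's earliest start time, and hence an optimal parallel schedule for those precedences, in linear time.

        def schedule_parallel(durations, predecessors):
            """Toy critical-path scheduler (illustrative, not the MIPS implementation).

            durations[i]    -- duration of the i-th action of the sequential plan
            predecessors[i] -- indices j < i of actions that must finish before action i starts

            The sequential plan order is a topological order of the precedence
            relation, so one left-to-right sweep (linear in the number of actions
            plus precedence pairs) yields earliest start times, i.e. an optimal
            parallel schedule for the given precedences.
            """
            start = [0.0] * len(durations)
            for i, preds in enumerate(predecessors):
                for j in preds:
                    start[i] = max(start[i], start[j] + durations[j])
            makespan = max((start[i] + d for i, d in enumerate(durations)), default=0.0)
            return start, makespan

        # Example: actions 0 and 1 are independent, action 2 needs both of them.
        starts, makespan = schedule_parallel([2.0, 3.0, 1.0], [[], [], [0, 1]])
        # starts == [0.0, 0.0, 3.0], makespan == 4.0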

    Towards an HLA Run-time Infrastructure with Hard Real-time Capabilities

    Our work takes place in the context of the HLA standard and its application to real-time systems. The HLA standard is inadequate for taking into account the different constraints involved in real-time computer systems. Much work has been invested in providing real-time capabilities to Run-Time Infrastructures (RTI) so that they can run real-time simulations. Most of these initiatives focus on major issues including QoS guarantees, knowledge of the Worst Case Transit Time (WCTT), and the scheduling services provided by the underlying operating systems. Although our ultimate objective is to achieve real-time capabilities for distributed HLA federation executions, this paper describes preliminary work focusing on achieving hard real-time properties for HLA federations running on a single computer under the Linux operating system. The paper proposes a novel, global, bottom-up approach for designing real-time Run-Time Infrastructures and a formal model for validating uniprocessor and, subsequently, distributed real-time simulations with CERTI.
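
    The abstract does not describe the mechanism itself, but as general background on hard real-time execution under Linux (an assumption for illustration, not the paper's CERTI design), a simulation process is typically moved into a fixed-priority real-time scheduling class and pinned to a core, for example:

        import os

        def enter_hard_realtime(priority=80, cpu=0):
            """Illustrative helper (assumed names, not the paper's CERTI design):
            move the calling process into the Linux SCHED_FIFO real-time class
            and pin it to one core to reduce scheduling jitter.
            Requires root or the CAP_SYS_NICE capability."""
            os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))  # pid 0 = this process
            os.sched_setaffinity(0, {cpu})                                     # pin to a single CPU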

    On the periodic behavior of real-time schedulers on identical multiprocessor platforms

    This paper proposes a general periodicity result concerning any deterministic and memoryless scheduling algorithm (including non-work-conserving algorithms), in any context, on identical multiprocessor platforms. By context we mean the hardware architecture (uniprocessor, multicore) as well as task constraints such as critical sections, precedence constraints, self-suspension, etc. Since the result is based only on the releases and deadlines, it is independent of any other parameter. Note that we do not claim that the given interval is minimal, but it is an upper bound on any cycle of any feasible schedule provided by any deterministic and memoryless scheduler.
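
    A toy illustration of the kind of periodicity being claimed (the scheduler, task model and state encoding below are assumptions, and the sketch does not reproduce the paper's bound): simulate a deterministic, memoryless scheduler such as discrete-time global EDF on m identical processors and watch for a repeated backlog state at hyperperiod boundaries.

        import math
        from functools import reduce

        def observe_periodicity(tasks, m, num_hyperperiods=4):
            """Toy illustration: discrete-time global EDF (deterministic and memoryless)
            on m identical processors, with synchronous periodic tasks (C, T) and
            implicit deadlines. Returns the first pair of hyperperiod boundaries at
            which the backlog state repeats, i.e. an observed schedule cycle."""
            H = reduce(math.lcm, (T for _, T in tasks))   # hyperperiod
            jobs = []                                     # [absolute_deadline, task_id, remaining]
            seen = {}                                     # backlog state -> time first observed
            for t in range(num_hyperperiods * H + 1):
                if t % H == 0:
                    state = tuple(sorted((d - t, i, r) for d, i, r in jobs if r > 0))
                    if state in seen:
                        return seen[state], t             # schedule repeats with period t - seen[state]
                    seen[state] = t
                for i, (C, T) in enumerate(tasks):        # synchronous job releases
                    if t % T == 0:
                        jobs.append([t + T, i, C])
                jobs = [j for j in jobs if j[2] > 0]
                jobs.sort()                               # earliest absolute deadline first
                for job in jobs[:m]:                      # run up to m jobs for one time unit
                    job[2] -= 1
            return None

        # Example: three tasks on two processors; with an empty backlog at t = 0,
        # the schedule typically cycles with the hyperperiod lcm(5, 7, 10) = 70.
        print(observe_periodicity([(2, 5), (3, 7), (5, 10)], m=2))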

    Generalizing input-driven languages: theoretical and practical benefits

    Regular languages (RL) are the simplest family in Chomsky's hierarchy. Thanks to their simplicity, they enjoy various nice algebraic and logic properties that have been successfully exploited in many application fields. Practically all of their related problems are decidable, so that they support automatic verification algorithms. Also, they can be recognized in real time. Context-free languages (CFL) are another major family, well suited to formalize programming, natural, and many other classes of languages; their increased generative power w.r.t. RL, however, causes the loss of several closure properties and of the decidability of important problems; furthermore, they need more complex parsing algorithms. Thus, various subclasses thereof have been defined with different goals, spanning from efficient, deterministic parsing to closure properties, logic characterization and automatic verification techniques. Among CFL subclasses, so-called structured ones, i.e., those where the typical tree structure is visible in the sentences, exhibit many of the algebraic and logic properties of RL, whereas deterministic CFL have been thoroughly exploited in compiler construction and other application fields. After surveying and comparing the main properties of those various language families, we go back to operator precedence languages (OPL), an old family through which R. Floyd pioneered deterministic parsing, and we show that they offer unexpected properties in two fields so far investigated in totally independent ways: they enable parsing parallelization in a more effective way than traditional sequential parsers, and they exhibit the same algebraic and logic properties so far obtained only for less expressive language families.
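
    As a textbook-style reminder of Floyd's technique (an illustrative sketch, not code from the surveyed works), the following builds the classical operator-precedence matrix for arithmetic expressions and reduces handles bottom-up; since every shift/reduce decision depends only on two adjacent terminals, disjoint handles can be located and reduced independently, which is the local-parsability property behind parallel parsing.

        LT, EQ, GT = '<', '=', '>'

        # Floyd operator-precedence matrix for E -> E+T | T, T -> T*F | F, F -> (E) | n,
        # over the terminals n, +, *, (, ) and the end marker #.
        PREC = {
            ('#', 'n'): LT, ('#', '+'): LT, ('#', '*'): LT, ('#', '('): LT,
            ('n', '+'): GT, ('n', '*'): GT, ('n', ')'): GT, ('n', '#'): GT,
            ('+', 'n'): LT, ('+', '+'): GT, ('+', '*'): LT, ('+', '('): LT, ('+', ')'): GT, ('+', '#'): GT,
            ('*', 'n'): LT, ('*', '+'): GT, ('*', '*'): GT, ('*', '('): LT, ('*', ')'): GT, ('*', '#'): GT,
            ('(', 'n'): LT, ('(', '+'): LT, ('(', '*'): LT, ('(', '('): LT, ('(', ')'): EQ,
            (')', '+'): GT, (')', '*'): GT, (')', ')'): GT, (')', '#'): GT,
        }
        HANDLES = [['n'], ['N', '+', 'N'], ['N', '*', 'N'], ['(', 'N', ')']]  # valid right-hand sides

        def top_terminal(stack):
            return next(s for s in reversed(stack) if s != 'N')

        def op_parse(tokens):
            """Skeletal operator-precedence parser: True iff the token string is accepted."""
            stack, rest = ['#'], list(tokens) + ['#']
            while True:
                a, b = top_terminal(stack), rest[0]
                if a == '#' and b == '#':
                    return True
                rel = PREC.get((a, b))
                if rel in (LT, EQ):                     # shift
                    stack.append(rest.pop(0))
                elif rel == GT:                         # reduce the topmost handle
                    terms = [i for i, s in enumerate(stack) if s != 'N']
                    k = len(terms) - 1
                    while k > 0 and PREC.get((stack[terms[k - 1]], stack[terms[k]])) != LT:
                        k -= 1
                    cut = terms[k - 1] + 1              # handle = everything above that terminal
                    if stack[cut:] not in HANDLES:
                        return False                    # popped string is not a right-hand side
                    del stack[cut:]
                    stack.append('N')                   # reduce to the generic nonterminal
                else:
                    return False                        # no precedence relation: syntax error

        print(op_parse("n+n*(n+n)"))  # True
        print(op_parse("n+*n"))       # False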

    Conformance Checking Based on Multi-Perspective Declarative Process Models

    Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behaviors provided in the form of a process model. The majority of these approaches require the input process model to be procedural (e.g., a Petri net). However, in turbulent environments, characterized by high variability, the process behavior is less stable and predictable. In these environments, procedural process models are less suitable for describing a business process. Declarative specifications, working under an open-world assumption, allow the modeler to express several possible execution paths as a compact set of constraints. Any process execution that does not contradict these constraints is allowed. One of the open challenges in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. In this paper, we close this gap by providing a framework for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and has been evaluated in three real-life case studies.
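
    A minimal sketch of what a multi-perspective declarative check looks like (illustrative only; the constraint, log format and helper below are assumptions and do not reproduce MP-Declare's full semantics): a 'response' constraint with a data condition, evaluated trace by trace over a toy event log.

        def check_response(trace, antecedent, consequent, data_condition=lambda e: True):
            """Toy multi-perspective 'response' check: every antecedent event whose
            payload satisfies data_condition must eventually be followed by the
            consequent activity within the same trace. Returns the indices of
            violating antecedent events (illustrative sketch only)."""
            violations = []
            for i, event in enumerate(trace):
                if event["activity"] == antecedent and data_condition(event):
                    if not any(e["activity"] == consequent for e in trace[i + 1:]):
                        violations.append(i)
            return violations

        # Toy event log: each trace is a list of events carrying an activity name and a payload.
        log = [
            [{"activity": "Order", "amount": 900}, {"activity": "Approve", "amount": 900}],
            [{"activity": "Order", "amount": 1500}],   # violates the constraint
            [{"activity": "Order", "amount": 50}],     # condition not triggered, hence conforming
        ]
        for n, trace in enumerate(log):
            v = check_response(trace, "Order", "Approve", data_condition=lambda e: e["amount"] >= 100)
            status = "conforming" if not v else f"{len(v)} violation(s)"
            print(f"trace {n}: {status}")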

    Algorithms for scheduling projects with generalized precedence relations.

    Project scheduling under the assumption of renewable resource constraints and generalized precedence relations, i.e. arbitrary minimal and maximal time lags between the starting and completion times of the activities of the project, constitutes an important and challenging problem. Over the past few years considerable progress has been made in the use of exact solution procedures for this problem type and its variants. We review the fundamental logic and report new computational experience with a branch-and-bound procedure for optimally solving resource-constrained project scheduling problems with generalized precedence relations of the precedence diagramming type, i.e. start-start, start-finish, finish-start and finish-finish relations with minimal time lags, for minimizing the project makespan. Subsequently, we review and report new results for several branch-and-bound procedures for the case of generalized precedence relations, including both minimal and maximal time lags, and demonstrate how the solution methodology can be extended to cope with other regular and nonregular objective functions such as maximizing the net present value of a project.
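
    Setting resources aside, the temporal core of generalized precedence relations reduces to longest-path computation in a lag network: a minimal time lag l between activities i and j becomes an arc of weight l, and a maximal time lag becomes a reverse arc of negative weight. A small label-correcting sketch (illustrative, not the reviewed branch-and-bound code) computes earliest starts and detects time-infeasibility:

        def earliest_starts(n, lags):
            """Earliest start times under generalized precedence relations
            (illustrative sketch; resource constraints are ignored).

            n    -- number of activities, indexed 0 .. n-1
            lags -- arcs (i, j, l) meaning start(j) >= start(i) + l; minimal time
                    lags give l >= 0, maximal time lags give a reverse arc with l < 0.
            Returns the earliest start schedule, or None if the lag network contains
            a positive cycle (the instance is time-infeasible)."""
            start = [0] * n
            for _ in range(n):                          # label-correcting (Bellman-Ford style)
                changed = False
                for i, j, l in lags:
                    if start[i] + l > start[j]:
                        start[j] = start[i] + l
                        changed = True
                if not changed:
                    return start
            return None                                 # still changing after n rounds: positive cycle

        # Example: start(1) >= start(0), start(2) >= start(1) + 3 (minimal lag), and
        # start(2) <= start(1) + 5 encoded as the reverse arc start(1) >= start(2) - 5 (maximal lag).
        print(earliest_starts(3, [(0, 1, 0), (1, 2, 3), (2, 1, -5)]))  # [0, 0, 3]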

    Industrial and Tramp Ship Routing Problems: Closing the Gap for Real-Scale Instances

    Recent studies in maritime logistics have introduced a general ship routing problem and a benchmark suite based on real shipping segments, considering pickups and deliveries, cargo selection, ship-dependent starting locations, travel times and costs, time windows, and incompatibility constraints, among other features. Together, these characteristics pose considerable challenges for exact and heuristic methods, and some cases with as few as 18 cargoes remain unsolved. To address this challenge, we propose an exact branch-and-price (B&P) algorithm and a hybrid metaheuristic. Our exact method generates elementary routes but exploits decremental state-space relaxation to speed up column generation, along with heuristic strong branching and advanced preprocessing and route enumeration techniques. Our metaheuristic is a sophisticated extension of the unified hybrid genetic search. It exploits a set-partitioning phase and uses problem-tailored variation operators to efficiently handle all the problem characteristics. As shown in our experimental analyses, the B&P method optimally solves 239/240 existing instances within one hour. Scalability experiments on even larger problems demonstrate that it can optimally solve problems with around 60 ships and 200 cargoes (i.e., 400 pickup and delivery services) and find optimality gaps below 1.04% on the largest cases with up to 260 cargoes. The hybrid metaheuristic outperforms all previous heuristics and produces near-optimal solutions within minutes. These results are noteworthy, since these instances are comparable in size to the largest problems routinely solved by shipping companies.
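
    Many of the listed features already appear when testing whether a single candidate route is feasible for one ship; the sketch below (a toy data model, not the benchmark format) checks pickup-before-delivery, time windows with waiting, and cargo incompatibilities, the kind of test that both the pricing routines of a branch-and-price and the variation operators of a hybrid metaheuristic perform repeatedly.

        def route_feasible(route, cargoes, travel_time, start_time=0.0, incompatible=()):
            """Toy feasibility test for a single ship's route (assumed data model).

            route        -- alternating visits such as ["p1", "d1"], where "pX"/"dX"
                            are the pickup and delivery of cargo X
            cargoes      -- {cargo_id: {"pickup_tw": (a, b), "delivery_tw": (a, b)}}
            travel_time  -- {(visit, visit): sailing time}
            incompatible -- iterable of cargo-id sets that may not be on board together
            """
            t, onboard, prev = start_time, set(), None
            for visit in route:
                kind, cargo = visit[0], visit[1:]
                if prev is not None:
                    t += travel_time[(prev, visit)]
                lo, hi = cargoes[cargo]["pickup_tw" if kind == "p" else "delivery_tw"]
                t = max(t, lo)                   # wait until the time window opens
                if t > hi:
                    return False                 # arrived after the window closed
                if kind == "p":
                    onboard.add(cargo)
                    if any(set(group) <= onboard for group in incompatible):
                        return False             # incompatible cargoes carried together
                else:
                    if cargo not in onboard:
                        return False             # delivery scheduled before its pickup
                    onboard.discard(cargo)
                prev = visit
            return True

        # Example: one selected cargo, a 4-hour sailing leg, both time windows respected.
        print(route_feasible(["p1", "d1"],
                             {"1": {"pickup_tw": (0, 10), "delivery_tw": (5, 20)}},
                             {("p1", "d1"): 4.0}))  # True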