8 research outputs found

    A Constrained Object Model for Configuration Based Workflow Composition

    Automatic or assisted workflow composition is a field of intense research, with applications to the World Wide Web and to business process modeling. Workflow composition is traditionally addressed in various ways, generally via theorem-proving techniques. Recent research observed that building a composite workflow bears strong relationships with finite model search, and that some workflow languages can be defined as constrained object metamodels. This led to considering the viability of applying configuration techniques to this problem, which was proven feasible. Constraint-based configuration expects a constrained object model as input. The purpose of this document is to formally specify, using the Z specification language, the constrained object model involved in ongoing experiments and research. Comment: This is an extended version of the article published at BPM'05, Third International Conference on Business Process Management, Nancy, France.
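
    As a rough illustration of the configuration view described above (and not of the Z specification itself), the following Python sketch casts composition as a small finite constraint search: a component may be selected only if its inputs are already available, and the search succeeds once the goal outputs have been produced. The service catalogue and the goal are invented for the example.

        # Illustrative sketch only: workflow composition cast as a small
        # finite constraint search, in the spirit of configuration-based
        # composition. Services and goal below are invented.
        SERVICES = {
            "geocode":   ({"address"}, {"coordinates"}),
            "weather":   ({"coordinates"}, {"forecast"}),
            "translate": ({"forecast"}, {"forecast_fr"}),
        }

        def compose(available, goal, chosen=()):
            """Return a sequence of services whose chained outputs cover `goal`."""
            if goal <= available:
                return list(chosen)
            for name, (inputs, outputs) in SERVICES.items():
                # Constraint: a service is selectable only if its inputs are
                # satisfied and it has not been used yet.
                if name not in chosen and inputs <= available:
                    result = compose(available | outputs, goal, chosen + (name,))
                    if result is not None:
                        return result
            return None

        print(compose({"address"}, {"forecast_fr"}))
        # -> ['geocode', 'weather', 'translate']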

    A Configurable Query Language for Semantic Web Service Composition

    Composition is one of the main challenges for the semantic web services (SWS) community. Among existing approaches, constraint-based techniques (such as configuration) have been shown to be effective for creating orchestrations from choreographies. Further experiments revealed the limitations and semantic ambiguities that can arise from the query submitted to a composer. This article proposes an original approach in which the query is viewed as a problem in its own right, and shows how configuration can be used to solve it by means of a constrained object model.

    Comparison of 3D Versus 4D Path Planning for Unmanned Aerial Vehicles

    This research compares 3D versus 4D (three spatial dimensions plus time) multi-objective and multi-criteria path planning for unmanned aerial vehicles in complex dynamic environments. In this study, we empirically analyse the performance of 3D and 4D path planning approaches. Using the empirical data, we show that the 4D approach is superior to the 3D approach, especially in complex dynamic environments. The research model, consisting of flight objectives and criteria, is developed based on interviews with an experienced military UAV pilot and mission planner to establish realism and relevancy in unmanned aerial vehicle flight planning. Furthermore, this study incorporates one of the most comprehensive sets of criteria identified during our literature search. The simulation results clearly show that the 4D path planning approach is able to provide solutions in complex dynamic environments in which the 3D approach could not find a solution.
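
    To make the comparison concrete, the sketch below (not the thesis implementation) searches a time-expanded grid in which a state is (x, y, z, t); because the obstacle moves over time, waiting in place can be part of a valid route, which a purely 3D search cannot express. The obstacle model, grid size and horizon are invented.

        # Minimal 4D sketch: breadth-first search over states (x, y, z, t)
        # with a hypothetical moving obstacle; the (0, 0, 0) move means "wait".
        from collections import deque

        def blocked(x, y, z, t):
            # Invented dynamic obstacle sweeping along the x axis over time.
            return (x, y, z) == (t % 4, 1, 0)

        def plan_4d(start, goal, size=4, horizon=20):
            start4 = (*start, 0)
            frontier = deque([(start4, [start4])])
            seen = {start4}
            moves = [(0, 0, 0), (1, 0, 0), (-1, 0, 0), (0, 1, 0),
                     (0, -1, 0), (0, 0, 1), (0, 0, -1)]
            while frontier:
                (x, y, z, t), path = frontier.popleft()
                if (x, y, z) == goal:
                    return path
                if t >= horizon:
                    continue
                for dx, dy, dz in moves:
                    nxt = (x + dx, y + dy, z + dz, t + 1)
                    nx, ny, nz, nt = nxt
                    inside = all(0 <= v < size for v in (nx, ny, nz))
                    if inside and not blocked(nx, ny, nz, nt) and nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [nxt]))
            return None

        print(plan_4d((0, 0, 0), (3, 1, 0)))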

    A comparison of CMS Tier0-dataflow scenarios using the Yasper simulation tool

    The CMS experiment at CERN will produce large amounts of data in short time periods. Because the data buffers at the experiment are not large enough, these data need to be transferred to other storage. The CMS Tier0 will be an enormous job processing and storage facility at the CERN site. One part of this Tier0, called the Tier0 input buffer, has the task of reading out the experiment data buffers and supplying the data to other tasks that need to be carried out with it (such as storing). It has to make sure that no data is lost. This thesis compares different scenarios for operating a set of disk servers in order to accomplish the Tier0 input buffer tasks. To increase the performance per disk server, write and read actions on the same disk server are separated. To find the optimal moments at which a disk server should switch from accepting and writing items to supplying items to other tasks, the combination of various parameters, such as the queuing discipline used (FIFO, LIFO, LPTF or SPTF) and the state of the disk server, has been studied. To make the actual comparisons, a simulation of dataflow models of the different scenarios has been used. These simulations have been performed with the Yasper simulation tool, which uses Petri net models as its input. To gain more confidence that the models represent the real situation, some model parts have been remodelled in a tool called GPSS, which does not use Petri nets but instead queuing models described in a special GPSS language. The results of the simulations show that the best queuing discipline for the Tier0 input buffer is LPTF, in particular in combination with a switch moment as soon as a disk server has been read out completely.
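
    The toy sketch below does not reproduce the Yasper/Petri net models of the thesis; it only illustrates how the queuing disciplines mentioned above order a single disk server's backlog of readout jobs and how that ordering changes the mean completion time. The job sizes are made up.

        # Toy comparison of queuing disciplines on one backlog of readout jobs.
        def mean_completion(jobs, discipline):
            order = {
                "FIFO": list(jobs),
                "LIFO": list(reversed(jobs)),
                "SPTF": sorted(jobs),                 # shortest processing time first
                "LPTF": sorted(jobs, reverse=True),   # longest processing time first
            }[discipline]
            clock, total = 0, 0
            for size in order:
                clock += size      # the single disk server works through the queue
                total += clock
            return total / len(jobs)

        jobs = [5, 1, 8, 2, 4]     # hypothetical readout job sizes
        for d in ("FIFO", "LIFO", "SPTF", "LPTF"):
            print(d, mean_completion(jobs, d))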

    Learning for Classical Planning

    This thesis is mainly about classical planning for artificial intelligence (AI). In planning, we deal with searching for a sequence of actions that changes the environment from a given initial state to a goal state. Planning problems in general are among the hardest problems not only in the area of AI, but in the whole of computer science. Even though classical planning problems do not consider many aspects of the real world, their complexity reaches EXPSPACE-completeness. Nevertheless, many planning systems (not only for classical planning) have been developed in the past, mainly thanks to the International Planning Competitions (IPC). Although current planning systems are very advanced, we have to boost these systems with additional knowledge provided by learning. In this thesis, we focus on developing learning techniques which produce additional knowledge from training plans and transform it back into planning domains and problems, so that the planners themselves do not have to be modified. The contribution of this thesis covers three areas. First, we provide theoretical background for plan analysis by investigating action dependencies and independencies. Second, we provide a method for generating macro-operators and removing unnecessary primitive operators. Experimental evaluation of this...
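
    One idea mentioned above, mining macro-operators from training plans, can be illustrated with a deliberately simplified sketch: count action pairs that occur adjacently in the training plans and keep the frequent ones as macro candidates. Real macro generation also analyses action dependencies, which is omitted here, and the example plans are invented.

        # Simplified macro-candidate mining from (invented) training plans.
        from collections import Counter

        training_plans = [
            ["pick", "move", "drop", "move"],
            ["pick", "move", "drop", "pick", "move", "drop"],
        ]

        pairs = Counter()
        for plan in training_plans:
            pairs.update(zip(plan, plan[1:]))   # adjacent action pairs

        # Keep pairs that occur at least twice as macro-operator candidates.
        macros = [p for p, n in pairs.items() if n >= 2]
        print(macros)   # -> [('pick', 'move'), ('move', 'drop')]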

    A study in grid simulation and scheduling

    Grid computing is emerging as an essential tool for large-scale analysis and problem solving in scientific and business domains. Whilst the idea of stealing unused processor cycles is as old as the Internet, we are still far from reaching a position where many distributed resources can be seamlessly utilised on demand. One major issue preventing this vision is deciding how to effectively manage remote resources and how to schedule tasks amongst them. This thesis describes an investigation into Grid computing, specifically the problem of Grid scheduling. This complex problem has many unique features that make it particularly difficult to solve, and as a result many current Grid systems employ simplistic, inefficient solutions. This work describes the development of a simulation tool, G-Sim, which can be used to test the effectiveness of potential Grid scheduling algorithms under realistic operating conditions. This tool is used to analyse the effectiveness of a simple, novel scheduling technique in numerous scenarios. The results are positive and show that it could be applied to current procedures to enhance performance and decrease the negative effect of resource failure. Finally, a conversion between the Grid scheduling problem and the classic computing problem SAT is provided. Such a conversion opens the possibility of applying sophisticated SAT solving procedures to Grid scheduling, potentially providing effective solutions.
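
    The flavour of such a conversion can be sketched as follows, with invented tasks and resources: a Boolean variable x[t][r] states that task t runs on resource r, and clauses enforce that every task is assigned to exactly one resource. A real encoding would also cover deadlines and capacities; this is an illustration, not the conversion given in the thesis.

        # Sketch of encoding a tiny assignment problem as CNF clauses
        # (DIMACS-style integers), ready for an off-the-shelf SAT solver.
        tasks, resources = range(3), range(2)
        var = {(t, r): 1 + t * len(resources) + r for t in tasks for r in resources}

        clauses = []
        for t in tasks:
            # Each task runs on at least one resource ...
            clauses.append([var[(t, r)] for r in resources])
            # ... and on at most one (pairwise exclusion).
            for r1 in resources:
                for r2 in resources:
                    if r1 < r2:
                        clauses.append([-var[(t, r1)], -var[(t, r2)]])

        for clause in clauses:
            print(clause)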

    Using Plan Decomposition for Continuing Plan Optimisation and Macro Generation

    This thesis addresses three problems in the field of classical AI planning: decomposing a plan into meaningful subplans, continuing plan quality optimisation, and macro generation for efficient planning. The importance and difficulty of each of these problems is outlined below. (1) Decomposing a plan into meaningful subplans can facilitate a number of post-plan generation tasks, including plan quality optimisation and macro generation, the two key concerns of this thesis. However, conventional plan decomposition techniques are often unable to decompose plans because they consider dependencies among steps rather than subplans. (2) Finding high-quality plans for large planning problems is hard. Planners that guarantee optimal, or bounded suboptimal, plan quality often cannot solve them. In one experiment with the Genome Edit Distance domain, optimal planners solved only 11.5% of problems. Anytime planners promise a way to successively produce better plans over time. However, current anytime planners tend to reach a limit where they stop finding any further improvement, and the plans produced are still very far from the best possible. In the same experiment, the LAMA anytime planner solved all problems but found plans whose average quality is 1.57 times worse than the best known. (3) Finding solutions quickly, or even finding any solution for large problems within some resource constraint, is also difficult. The best-performing planner in the 2014 International Planning Competition still failed to solve 29.3% of problems. Re-engineering a domain model by capturing and exploiting structural knowledge in the form of macros has been found very useful in speeding up planners. However, existing planner-independent macro generation techniques often fail to capture some promising macro candidates because the constituent actions are not found in sequence in the totally ordered training plans.

    This thesis contributes to plan decomposition by developing a new plan deordering technique, named block deordering, that allows two subplans to be unordered even when their constituent steps cannot. Based on the block-deordered plan, this thesis further contributes to plan optimisation and macro generation, and their implementations in two systems, named BDPO2 and BloMa. Key to BDPO2 is a decomposition into subproblems of improving parts of the current best plan, rather than the plan as a whole. BDPO2 can be seen as an application of the large neighbourhood search strategy to planning. We use several windowing strategies to extract subplans from the block deordering of the current plan, and on-line learning for applying the most promising subplanners to the most promising subplans. We demonstrate empirically that even starting with the best plans found by other means, BDPO2 is still able to continue improving plan quality, and often produces better plans than other anytime planners when all are given enough runtime. BloMa uses an automatic, planner-independent technique to extract and filter “self-contained” subplans as macros from the block-deordered training plans. These macros represent important longer activities and are useful for improving planners' coverage and efficiency compared to traditional macro generation approaches.
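
    The windowing idea behind BDPO2 can be loosely sketched as follows: slide a window over the current plan, treat each window as a subproblem, and accept a replacement subplan when it is shorter. The improvement step below is a stub standing in for the subplanners used in the thesis, and the plan and action names are invented.

        # Loose sketch of window-based local plan improvement.
        def windows(plan, size):
            return [(i, plan[i:i + size]) for i in range(len(plan) - size + 1)]

        def improve(subplan):
            # Stub improvement: drop an adjacent do/undo pair (e.g. stack/unstack).
            for i in range(len(subplan) - 1):
                if subplan[i + 1] == "un" + subplan[i]:
                    return subplan[:i] + subplan[i + 2:]
            return subplan

        plan = ["load", "stack", "unstack", "drive", "unload"]
        for start, sub in windows(plan, 3):
            better = improve(sub)
            if len(better) < len(sub):
                plan = plan[:start] + better + plan[start + 3:]
                break

        print(plan)   # -> ['load', 'drive', 'unload']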