
    A Graph-Based Semantics Workbench for Concurrent Asynchronous Programs

    A number of novel programming languages and libraries have been proposed that offer simpler-to-use models of concurrency than threads. It is challenging, however, to devise execution models that successfully realise their abstractions without forfeiting performance or introducing unintended behaviours. This is exemplified by SCOOP---a concurrent object-oriented message-passing language---which has seen multiple semantics proposed and implemented over its evolution. We propose a "semantics workbench" with fully and semi-automatic tools for SCOOP that can be used to analyse and compare programs with respect to different execution models. We demonstrate its use in checking the consistency of semantics by applying it to a set of representative programs and highlighting a deadlock-related discrepancy between the principal execution models of the language. Our workbench is based on a modular and parameterisable graph transformation semantics implemented in the GROOVE tool. We discuss how graph transformations are leveraged to atomically model intricate language abstractions, and how the visual yet algebraic nature of the model can be used to ascertain soundness.
    Comment: Accepted for publication in the proceedings of FASE 2016 (to appear).
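    To make the graph transformation idea concrete, here is a minimal toy sketch of a single rewrite step: match a left-hand-side edge pattern and replace it atomically. The rule, edge labels, and function names are invented for illustration; GROOVE expresses rules graphically, and this plain-Python sketch is not the workbench's actual semantics.

        # Toy sketch of one graph-transformation step (illustrative only;
        # GROOVE rules are graphical, not Python functions).
        # The rule atomically replaces a "request" edge with a "lock" edge.

        def apply_rule(edges, client, resource):
            """Rewrite: client -request-> resource  ==>  client -lock-> resource."""
            lhs = (client, "request", resource)      # left-hand-side pattern
            if lhs not in edges:
                return None                          # rule is not applicable
            rhs = set(edges)
            rhs.remove(lhs)                          # delete the matched edge
            rhs.add((client, "lock", resource))      # add the replacement edge
            return rhs                               # the rewritten graph

        state = {("c1", "request", "printer"), ("c2", "request", "printer")}
        print(apply_rule(state, "c1", "printer"))    # c1 holds the lock; c2 waits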

    Grammar-Aware Question-Answering on Quantum Computers

    Natural language processing (NLP) is at the forefront of great advances in contemporary AI, and it is arguably one of the most challenging areas of the field. At the same time, with the steady growth of quantum hardware and notable improvements towards implementations of quantum algorithms, we are approaching an era when quantum computers perform tasks that cannot be done on classical computers with a reasonable amount of resources. This provides a new range of opportunities for AI, and for NLP specifically. Earlier work has already demonstrated a potential quantum advantage for NLP in a number of ways: (i) algorithmic speedups for search-related or classification tasks, which are the dominant tasks within NLP; (ii) exponentially large quantum state spaces that can accommodate complex linguistic structures; (iii) novel models of meaning employing density matrices that naturally capture linguistic phenomena such as hyponymy and ambiguity, among others. In this work, we perform the first implementation of an NLP task on noisy intermediate-scale quantum (NISQ) hardware. Sentences are instantiated as parameterised quantum circuits. We encode word meanings in quantum states and explicitly account for grammatical structure, which even in mainstream NLP is not commonplace, by faithfully hard-wiring it as entangling operations. This makes our approach to quantum natural language processing (QNLP) particularly NISQ-friendly. Our novel QNLP model shows concrete promise for scalability as the quality of quantum hardware improves in the near future.
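    As a rough sketch of the approach, the toy model below encodes two word meanings as parameterised qubit rotations and hard-wires their grammatical connection as an entangling CNOT gate. The ansatz, parameter values, and function names are assumptions made for illustration and are not the paper's actual circuits.

        import numpy as np

        # Illustrative sketch: word meanings as parameterised quantum states,
        # grammar as a fixed entangling operation (not the paper's circuits).

        def ry(theta):
            """Single-qubit Y-rotation; theta is a trainable word parameter."""
            c, s = np.cos(theta / 2), np.sin(theta / 2)
            return np.array([[c, -s], [s, c]])

        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])

        def sentence_state(theta_noun, theta_verb):
            """Two word qubits entangled according to a fixed grammar wiring."""
            word1 = ry(theta_noun) @ np.array([1.0, 0.0])   # |0> rotated
            word2 = ry(theta_verb) @ np.array([1.0, 0.0])
            return CNOT @ np.kron(word1, word2)             # grammar wiring

        probs = np.abs(sentence_state(0.7, 1.9)) ** 2       # measurement stats
        print(probs.round(3))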

    Identification of uncertainty sources in distributed hydrological modelling: Case study of the Grote Nete catchment in Belgium

    The quest for good practice in modelling merits thorough and sustained attention, since good practice increases the credibility and impact of the information and insight that modelling seeks to generate. This paper presents the findings of an evaluation whose goal was to understand the uncertainty in applying a distributed hydrological model to the Grote Nete catchment in Flanders, Belgium. Uncertainties were selected for investigation depending on how significantly they affected the model's decision variables. A fault tree was used to determine the various combinations of input, mathematical-code, and human-error failures that could result in a specified risk. A combination of forward and backward approaches was used in developing the fault tree. Eleven events were identified as contributing to the top event, and seven gates were used to describe the tree. A critical path analysis was carried out for the events, establishing their rank order of significance. Three measures of importance were applied: the Fussell-Vesely, the Birnbaum, and the Barlow-Proschan importance measures. The development of distributed models involves considerable uncertainty; many dependencies arise naturally, and their correct evaluation is crucial to accurately analysing the reliability of the modelling system.
    Keywords: distributed hydrological models, Grote Nete, MIKE SHE, uncertainty
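    For context on the importance measures named above, the sketch below computes Birnbaum and (approximate) Fussell-Vesely importance for a toy fault tree with top event (E1 AND E2) OR E3. The tree, probabilities, and function names are hypothetical, and much smaller than the study's eleven-event, seven-gate tree.

        # Hedged sketch of two importance measures on a toy fault tree,
        # assuming independent basic events (not the Grote Nete tree).

        def p_top(p):
            """P(top) for top = (E1 AND E2) OR E3, events independent."""
            a = p["E1"] * p["E2"]
            return a + p["E3"] - a * p["E3"]

        def birnbaum(p, e):
            """Birnbaum importance: P(top | e occurs) - P(top | e does not)."""
            return p_top({**p, e: 1.0}) - p_top({**p, e: 0.0})

        def fussell_vesely(p, e):
            """Common approximation of Fussell-Vesely importance: the share
            of top-event probability lost when e is forced not to occur."""
            return (p_top(p) - p_top({**p, e: 0.0})) / p_top(p)

        p = {"E1": 0.10, "E2": 0.20, "E3": 0.05}
        for e in p:
            print(e, round(birnbaum(p, e), 4), round(fussell_vesely(p, e), 4))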

    Geological parameterisation of petroleum reservoir models for improved uncertainty quantification

    As uncertainty can never be removed from reservoir forecasts, accurate quantification of uncertainty is the only appropriate basis for reservoir predictions. Bayes' theorem defines a framework by which the uncertainty in a reservoir can be ascertained by updating prior definitions of uncertainty with the mismatch between our simulation models and the measured production data. In the simplest version of the Bayesian methodology we assume that a realistic representation of our field exists as a particular combination of model parameters drawn from a set of uniform prior ranges. All models are believed to be initially equally likely, but are updated to new values of uncertainty based on the misfit between the simulated and historical production data. Furthermore, most effort in reservoir uncertainty quantification and automated history matching has been applied to non-geological model parameters, preferring to leave the geological aspects of the reservoir static. While such an approach is the easiest to apply, the reality is that the majority of reservoir uncertainty stems from the geological aspects of the reservoir; geological parameters should therefore be included in the prior, and those priors should be conditioned on the full extent of geological knowledge so as to remove combinations that are not possible in nature. This thesis develops methods of geological parameterisation to capture geological features and to assess the impact of geologically derived non-uniform prior definitions, and of the choice of modelling method/interpretation, on the quantification of uncertainty. A number of case studies are developed, using synthetic models and a real field data set, which show that the inclusion of geological prior data reduces the amount of quantified uncertainty and improves the performance of sampling. The framework allows the inclusion of any data type, reflecting the variety of geological information sources.

    Errors in the interpretation of the geology and/or the choice of an appropriate modelling method have an impact on the quantified uncertainty. In the cases developed in this thesis all models were able to produce good history matches, but the differences between the models led to differences in the amount of quantified uncertainty. The result is that each quantification would lead to different development decisions, and that a combination of several models may be required when a single modelling approach cannot be defined. The overall conclusion of the work is that geological prior data should be used in uncertainty quantification to reduce the uncertainty in forecasts by preventing bias from non-realistic models.
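    A minimal sketch of the Bayesian updating step described above, assuming a Gaussian likelihood: an ensemble of initially equally likely models is re-weighted by the misfit between simulated and observed production data. The data, ensemble, and function name are invented for illustration and are not the thesis's models.

        import numpy as np

        # Illustrative sketch of misfit-based Bayesian re-weighting of an
        # initially uniform ensemble (toy numbers, not the thesis code).

        def posterior_weights(simulated, observed, sigma):
            """Gaussian-likelihood weights from sum-of-squares misfit."""
            misfits = np.sum((simulated - observed) ** 2, axis=1) / (2.0 * sigma ** 2)
            logw = -misfits
            w = np.exp(logw - logw.max())       # stabilise before normalising
            return w / w.sum()

        observed = np.array([100.0, 95.0, 88.0])        # measured rates
        simulated = np.array([[101.0, 96.0, 87.0],      # model 1 forecast
                              [120.0, 110.0, 99.0],     # model 2 forecast
                              [99.0, 94.0, 90.0]])      # model 3 forecast
        print(posterior_weights(simulated, observed, sigma=5.0).round(3))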

    A Real-world Case Study of Process and Data Driven Predictive Analytics for Manufacturing Workflows

    We present a novel application of business process modelling and simulation to manufacturing workflows. Using formal methods, we produce correct-by-construction executable models that can be simulated in an interleaved way. The simulation draws advanced analytics from live IoT monitoring as well as an ERP system to provide predictive business intelligence. We describe our process and resource modelling efforts in the context of a collaborative project with two manufacturing partners. We evaluate our results based on the improvement of scheduling accuracy for real production flows.
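    As a loose, assumption-laden illustration of simulation-driven prediction for manufacturing workflows, the toy below greedily dispatches jobs onto identical machines and predicts completion times. The paper's models are formal and correct-by-construction; this sketch is not, and its job data and function names are hypothetical.

        import heapq

        # Toy sketch: predict workflow completion times by simulating greedy
        # dispatch of jobs onto machines (illustrative only).

        def simulate(jobs, n_machines):
            """Dispatch (job, duration) pairs onto the earliest-free machine."""
            machines = [0.0] * n_machines    # time each machine becomes free
            heapq.heapify(machines)
            finish = {}
            for job, duration in jobs:
                start = heapq.heappop(machines)   # earliest-free machine
                end = start + duration
                finish[job] = end
                heapq.heappush(machines, end)
            return finish

        jobs = [("weld", 3.0), ("paint", 2.0), ("assemble", 4.0), ("pack", 1.0)]
        print(simulate(jobs, n_machines=2))       # predicted completion times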

    Metaheuristics For Solving Real World Employee Rostering and Shift Scheduling Problems

    Optimising resources and making considered decisions are central concerns for any responsible organisation aiming to achieve its goals efficiently. Careful use of resources can have positive outcomes in the form of fiscal savings, improved service levels, better quality products, improved awareness of diminishing returns, and general output efficiency, regardless of field. Operational research techniques are advanced analytical tools used to improve managerial decision-making; case studies have shown them saving millions of pounds, and they have been applied successfully in a multitude of fields, including agriculture, policing, defence, conservation, and air traffic control. In particular, management of resources in the form of employees is a challenging problem --- but one with the potential for huge improvements in efficiency. The problem this thesis tackles can be divided into two sub-problems: the personalised shift scheduling & employee rostering problem, and the roster pattern problem. The personalised shift scheduling & employee rostering problem involves the direct scheduling of employees to hours and days of the week. This allows the creation of schedules tailored to individuals and gives fine-grained control over the results, but at the cost of a large and challenging search space. The roster pattern problem instead takes the patterns employees currently work and uses these as a pool of potential schedules. This reduces the search space and also minimises the number of changes to existing employee schedules, which is preferable for personnel satisfaction. Existing research has shown that different algorithms suit different problems, and hybrid methods typically outperform standalone ones in real-world contexts. Several algorithmic approaches for solving variations of the employee scheduling problem are considered in this thesis. Initially a VNS approach was used with a Metropolis-Hastings acceptance criterion (sketched below). The second approach utilises ER&SR controlled by the EMCAC, which had previously been used only in exam timetabling, never in employee scheduling and rostering. ER&SR was then hybridised with our initial approach, producing ER&SR with VNS. Finally, ER&SR was hybridised with Integer Programming into a matheuristic and compared to the hybrid's individual components. A contribution of this thesis is evidence that ER&SR has merit outside its original sub-field of exam timetabling and can be applied to shift scheduling and employee rostering. Furthermore, the schedules produced by the hybridisations were found to be of higher quality than those of the standalone algorithm. The literature review found that hybrid algorithms have become more popular for real-world problems in recent years, and this body of work has explored and continued that trend. The problem formulations in this thesis provide insight into creating constraints that minimise employee dissatisfaction, particularly with regard to abrupt change. The research presented in this thesis has positively impacted a multinational, multibillion-dollar field service operations company. This was achieved by implementing a variety of techniques, including metaheuristics and a matheuristic, to schedule shifts and roster employees over a period of several months. This thesis showcases the research outputs of the project and highlights the real-world impact of this research.
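    Here is a hedged sketch of that first approach: a VNS loop with a Metropolis-Hastings acceptance criterion, applied to a toy rostering objective. The cost function, neighbourhood shaking, and parameter values are invented for illustration and are not the thesis implementation.

        import math
        import random

        # Illustrative VNS with Metropolis-Hastings acceptance (toy problem:
        # bring each shift load towards a target of 8 hours).

        def vns_metropolis(initial, cost, shake, k_max, iters, temperature):
            """Basic VNS loop with a fixed evaluation budget."""
            current = best = initial
            k = 1
            for _ in range(iters):
                candidate = shake(current, k)
                delta = cost(candidate) - cost(current)
                # Metropolis-Hastings: always accept improvements; accept
                # worsening moves with probability exp(-delta / T).
                if delta <= 0 or random.random() < math.exp(-delta / temperature):
                    current = candidate
                    k = 1                       # restart neighbourhood sequence
                else:
                    k = k % k_max + 1           # move to the next neighbourhood
                if cost(current) < cost(best):
                    best = current
            return best

        def cost(roster):
            """Toy objective: total deviation of shift loads from 8 hours."""
            return sum(abs(h - 8) for h in roster)

        def shake(roster, k):
            """k-th neighbourhood: perturb k randomly chosen shift loads."""
            r = list(roster)
            for _ in range(k):
                i = random.randrange(len(r))
                r[i] = max(0, r[i] + random.choice([-1, 1]))
            return r

        print(vns_metropolis([6, 10, 12, 4], cost, shake,
                             k_max=3, iters=500, temperature=1.0))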