Dynamic Controllability Made Simple
Simple Temporal Networks with Uncertainty (STNUs) are a well-studied model for representing temporal constraints, where some intervals (contingent links) have an unknown but bounded duration, discovered only during execution. An STNU is dynamically controllable (DC) if there exists a strategy to execute its time-points satisfying all the constraints, regardless of the actual duration of contingent links revealed during execution.
In this work we present a new system of constraint propagation rules for STNUs, which is sound-and-complete for DC checking. Our system comprises just three rules which, unlike those proposed in all previous works, only generate unconditioned constraints. In particular, after applying our sound rules, the network remains an STNU in all respects. Moreover, our completeness proof is short and non-algorithmic, based on the explicit construction of a valid execution strategy. This is a substantial simplification of the theory which underlies all the polynomial-time algorithms for DC checking.
Our analysis also shows: (1) the existence of late execution strategies for STNUs; (2) the equivalence of several variants of the notion of DC; and (3) the existence of a fast algorithm for real-time execution of STNUs, which runs in O(KN) total time in a network with K contingent links and N time-points, considerably improving the previous O(N^3)-time bound.
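The abstract above builds on the classical Simple Temporal Network machinery. As a minimal sketch (illustrating only the standard STN backbone these networks extend, not the paper's three-rule propagation system), consistency of a set of difference constraints T[y] - T[x] <= w can be decided by negative-cycle detection on the distance graph:

```python
# Minimal sketch of the classical STN consistency check underlying
# STNU reasoning: constraints T[y] - T[x] <= w are consistent iff the
# corresponding weighted graph has no negative cycle (Bellman-Ford).
# This illustrates the standard backbone only, not the paper's
# three-rule system.

def stn_consistent(num_points, edges):
    """edges: list of (x, y, w) meaning T[y] - T[x] <= w."""
    dist = [0] * num_points          # virtual source reaches all nodes
    for _ in range(num_points):      # standard Bellman-Ford relaxation
        changed = False
        for x, y, w in edges:
            if dist[x] + w < dist[y]:
                dist[y] = dist[x] + w
                changed = True
        if not changed:
            break
    # if any constraint can still be relaxed, there is a negative cycle
    return all(dist[x] + w >= dist[y] for x, y, w in edges)

# A consistent triangle: B in [1, 10] after A, C in [2, 5] after B.
edges = [(0, 1, 10), (1, 0, -1), (1, 2, 5), (2, 1, -2)]
print(stn_consistent(3, edges))  # True
```

Dynamic controllability is strictly harder than this static check, since contingent durations are only revealed at execution time; the sketch is just the shared vocabulary of distance graphs and propagation.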
Checking Dynamic Consistency of Conditional Hyper Temporal Networks via Mean Payoff Games (Hardness and (pseudo) Singly-Exponential Time Algorithm)
In this work we introduce the Conditional Hyper Temporal Network (CHyTN) model, which is a natural extension and generalization of both the CSTN and the HTN model. Our contribution goes as follows. We show that deciding whether a given CSTN or CHyTN is dynamically consistent is coNP-hard. Then, we offer a proof that deciding whether a given CHyTN is dynamically consistent is PSPACE-hard, provided that the input instances are allowed to include both multi-head and multi-tail hyperarcs. In light of this, we continue our study by focusing on CHyTNs that allow only multi-head or only multi-tail hyperarcs, and we offer the first deterministic (pseudo) singly-exponential time algorithm for the problem of checking the dynamic consistency of such CHyTNs, also producing a dynamic execution strategy whenever the input CHyTN is dynamically consistent. Since CSTNs are a special case of CHyTNs, this provides as a byproduct the first sound-and-complete (pseudo) singly-exponential time algorithm for checking dynamic consistency in CSTNs. The proposed algorithm is based on a novel connection between CSTNs/CHyTNs and Mean Payoff Games (MPGs). The presentation of the connection between CSTNs/CHyTNs and MPGs is mediated by the HTN model. In order to analyze the algorithm, we introduce a refined notion of dynamic consistency, named ε-dynamic-consistency, and present a sharp lower-bounding analysis on the critical value of the reaction time ε where a CSTN/CHyTN transits from being, to not being, dynamically consistent. The proof technique introduced in this analysis of ε is applicable more generally when dealing with linear difference constraints which include strict inequalities.
It could rain: weather forecasting as a reasoning process
Meteorological forecasting is the process of providing reliable predictions about the future weather within a given interval of time. Forecasters adopt a model of reasoning that can be mapped onto an integrated conceptual framework. A forecaster essentially processes data in advance by using some models of machine learning to extract macroscopic tendencies such as air movements, pressure, temperature, and humidity differentials, measured in ways that depend upon the model but, fundamentally, as gradients. Limit values are employed to transform these tendencies into fuzzy values, which are then compared to each other in order to extract indicators; these indicators are then evaluated by means of priorities based upon distance in fuzzy values. We formalise the method proposed above in a workflow of evaluation steps, and propose an architecture that implements the reasoning techniques.
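The fuzzification step described above can be sketched as follows. This is a hypothetical illustration: the limit values, variable names, and the min/distance combination rules are assumptions for exposition, not taken from the paper.

```python
# Hypothetical sketch of the fuzzy-evaluation step: a measured tendency
# (a gradient) is mapped onto a fuzzy value in [0, 1] via limit values,
# and indicators are compared by their distance in fuzzy terms.
# Thresholds and names here are illustrative only.

def fuzzify(value, low, high):
    """Linear membership: 0 below `low`, 1 above `high`."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# Example tendencies (per-hour gradients; made-up units and limits).
pressure_drop = fuzzify(3.2, low=1.0, high=5.0)   # falling pressure
humidity_rise = fuzzify(8.0, low=2.0, high=10.0)  # rising humidity

# A rain indicator as the weaker of the two fuzzy conditions, and a
# priority based on the distance between the two fuzzy values.
rain_indicator = min(pressure_drop, humidity_rise)
priority = 1.0 - abs(pressure_drop - humidity_rise)
print(round(rain_indicator, 3), round(priority, 3))  # 0.55 0.8
```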
Quantum Programming Made Easy
We present IQu, namely a quantum programming language that extends Reynolds' Idealized Algol, the paradigmatic core of Algol-like languages. IQu combines imperative programming with higher-order features, mediated by a simple type theory. IQu mildly merges its quantum features with the classical programming style that we can experiment with through Idealized Algol, the aim being to ease the transition towards the quantum programming world. The proposed extension is developed along two main directions. First, IQu provides access to quantum co-processors by means of quantum stores. Second, IQu includes some support for the direct manipulation of quantum circuits, in accordance with recent trends in the development of quantum programming languages. Finally, we show that IQu is quite effective in expressing well-known quantum algorithms.
In Proceedings Linearity-TLLA 2018, arXiv:1904.0615
Adaptive Time- and Process-Aware Information Systems
For the digitized enterprise the proper handling of the temporal aspects of its business processes is vital. Delivery times, appointments and deadlines must be met, processing times and durations be monitored, and optimization objectives shall be pursued. However, contemporary Process-Aware Information Systems (PAISs)--the go-to solution for the computer-aided support of business processes--still lack a sophisticated support of the time perspective. Hence, there is a high demand for a more profound support of temporal aspects in PAISs. Accordingly, both the specification and the operational support of temporal aspects constitute fundamental challenges for the further development and dissemination of PAISs. The aim of this thesis is to propose a framework for supporting the time perspective of business processes in PAISs. As PAISs enable the design, execution and evolution of business processes, the designated framework must support these three fundamental phases of the process life cycle.
The ATAPIS framework proposed by this thesis essentially comprises three major components.
First, a universal and comprehensive set of time patterns is provided. Respective time patterns represent temporal concepts commonly found in business processes and are based on empirical evidence. In particular, they provide a universal and comprehensive set of notions for describing temporal aspects in business processes. Moreover, a precise formal semantics for each of the time patterns is provided based on an in-depth analysis of a large set of real-world use cases. Respective formal semantics enable the proper integration of the time patterns into PAISs. In turn, the latter will allow for the specification of time-aware process schemas.
Second, a generic framework for implementing the time patterns based on their formal semantics is developed. The framework and its techniques enable the verification of time-aware process schemas regarding their temporal consistency, i.e., their ability to be successfully executed without violating any of their temporal constraints. Subsequently, the framework is extended to consider advanced aspects like the contingent nature of activity durations and alternative execution paths as well. Moreover, an algorithm as well as techniques for executing and monitoring time-aware process instances in PAISs are provided. Based on the presented concepts, it becomes possible to ensure that a time-aware process instance may be executed without violating any of its temporal constraints.
Third, a set of change operations for dynamically modifying time-aware process instances during run time is suggested. Respective change operations ensure that a modified time-aware process instance remains temporally consistent after the respective modification. Moreover, to reduce the complexity involved when applying multiple change operations a sophisticated approximation-based technique is presented. Overall, the developed change operations allow providing the flexibility required by business processes in practice.
Altogether, the ATAPIS framework provides fundamental concepts, techniques and algorithms for integrating the time perspective into PAISs. A particular strength of this framework is that the specification, execution and evolution of business processes are supported by one integrated approach.
Risk-aware shielding of Partially Observable Monte Carlo Planning policies
Partially Observable Monte Carlo Planning (POMCP) is a powerful online algorithm that can generate approximate policies for large Partially Observable Markov Decision Processes. The online nature of this method supports scalability by avoiding complete policy representation. However, the lack of an explicit policy representation hinders interpretability and a proper evaluation of the risks an agent may incur. In this work, we propose a methodology based on Maximum Satisfiability Modulo Theory (MAX-SMT) for analyzing POMCP policies by inspecting their traces, namely, sequences of belief-action pairs generated by the algorithm. The proposed method explores local properties of the policy to build a compact and informative summary of the policy behaviour. Moreover, we introduce a rich and formal language that a domain expert can use to describe the expected behaviour of a policy. In more detail, we present a formulation that directly computes the risk involved in taking actions by considering the high-level elements specified by the expert. The final formula can identify risky decisions taken by POMCP that violate the expert indications. We show that this identification process can be used offline (to improve the policy's explainability and identify anomalous behaviours) or online (to shield the risky decisions of the POMCP algorithm). We present an extended evaluation of our approach on four domains: the well-known tiger and rocksample benchmarks, a problem of velocity regulation in mobile robots, and a problem of battery management in mobile robots. We test the methodology against a state-of-the-art anomaly detection algorithm to show that our approach can be used to identify anomalous behaviours in faulty POMCP. We also show, comparing the performance of shielded and unshielded POMCP, that the shielding mechanism can improve the system's performance. We provide an open-source implementation of the proposed methodologies at https://github.com/GiuMaz/XPOMCP
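The trace-inspection idea can be sketched in a simplified form. The sketch below is illustrative only (a plain rule check, not the paper's MAX-SMT formulation); the rule, belief keys, and numbers are hypothetical assumptions for exposition.

```python
# Illustrative sketch of shielding a belief-action trace: an expert
# rule flags steps whose action is too risky under the current belief.
# This is a simplified stand-in for the paper's MAX-SMT formulation;
# all names and thresholds are hypothetical.

def shield(trace, risky_action, belief_key, threshold):
    """Return indices of steps violating the expert rule:
    'never take `risky_action` while P(belief_key) > threshold'."""
    violations = []
    for i, (belief, action) in enumerate(trace):
        if action == risky_action and belief.get(belief_key, 0.0) > threshold:
            violations.append(i)
    return violations

# A toy trace of (belief, action) pairs from a velocity-regulation task.
trace = [
    ({"obstacle": 0.1}, "fast"),
    ({"obstacle": 0.7}, "fast"),   # risky: high obstacle probability
    ({"obstacle": 0.8}, "slow"),
]
print(shield(trace, "fast", "obstacle", 0.5))  # [1]
```

Used online, such a check would veto the flagged action and fall back to a safe alternative; used offline, the flagged indices point at trace steps worth inspecting.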
28th International Symposium on Temporal Representation and Reasoning (TIME 2021)
The 28th International Symposium on Temporal Representation and Reasoning (TIME 2021) was planned to take place in Klagenfurt, Austria, but had to move to an online conference due to the uncertainties and restrictions caused by the pandemic. Since its first edition in 1994, the TIME Symposium has been quite unique in the panorama of scientific conferences, as its main goal is to bring together researchers from distinct research areas involving the management and representation of temporal data, as well as reasoning about temporal aspects of information. Moreover, the TIME Symposium aims to bridge theoretical and applied research, and to serve as an interdisciplinary forum for exchange among researchers from the areas of artificial intelligence, database management, logic and verification, and beyond.
JUpdate: A JSON Update Language
Although JSON documents are being used in several emerging applications (e.g., Big Data applications, IoT, mobile computing, smart cities, and online social networks), there is no consensual or standard language for updating JSON documents (i.e., creating, deleting or changing such documents, where changing means inserting, deleting, replacing, copying, moving, etc., portions of data in such documents). To fill this gap, we propose in this paper an SQL-like language, named JUpdate, for updating JSON documents. JUpdate is based on a set of six primitive update operations, which is proven complete and minimal, and it provides a set of fourteen user-friendly high-level operations with a well-founded semantics defined on the basis of the primitive update operations
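The kinds of updates the abstract enumerates (inserting, deleting, replacing portions of a JSON document) can be illustrated conceptually with plain Python. This is not JUpdate syntax, which the paper defines as an SQL-like language; the document and operations below are made up for illustration.

```python
import json

# Conceptual illustration only: the kinds of primitive JSON updates the
# abstract mentions (insert, delete, replace), applied with plain
# Python dict/list operations. Not JUpdate's actual SQL-like syntax.

doc = json.loads('{"city": "Verona", "tags": ["IoT"]}')

doc["country"] = "Italy"        # insert a new member
doc["tags"].append("BigData")   # insert into an array
doc["city"] = "Milan"           # replace a value
del doc["tags"][0]              # delete an array element

print(json.dumps(doc, sort_keys=True))
# {"city": "Milan", "country": "Italy", "tags": ["BigData"]}
```

JUpdate's contribution is to give such operations a consensual, SQL-like surface syntax with a semantics grounded in a provably complete and minimal set of six primitives.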
Machine understanding surgical actions from intervention procedure textbooks
The automatic extraction of procedural surgical knowledge from surgery manuals, academic papers, or other high-quality textual resources is of the utmost importance for developing knowledge-based clinical decision support systems, automatically executing some of a procedure's steps, or summarizing the procedural information, spread throughout the texts, in a structured form usable as a study resource by medical students. In this work, we propose a first benchmark on extracting detailed surgical actions from available intervention procedure textbooks and papers. We frame the problem as a Semantic Role Labeling task. Exploiting a manually annotated dataset, we apply different Transformer-based information extraction methods. Starting from RoBERTa and BioMedRoBERTa pre-trained language models, we first investigate a zero-shot scenario and compare the obtained results with a full fine-tuning setting. We then introduce a new ad-hoc surgical language model, named SurgicBERTa, pre-trained on a large collection of surgical materials, and we compare it with the previous ones. In the assessment, we explore different dataset splits (one in-domain and two out-of-domain), and we also investigate the effectiveness of the approach in a few-shot learning scenario. Performance is evaluated on three correlated sub-tasks: predicate disambiguation, semantic argument disambiguation, and predicate-argument disambiguation. Results show that fine-tuning a pre-trained domain-specific language model achieves the highest performance on all splits and on all sub-tasks. All models are publicly released.
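Scoring such Semantic Role Labeling sub-tasks typically reduces to set comparison of predicate-argument structures. The sketch below shows one common way to do this; the triple format and role labels are illustrative assumptions, not the paper's exact dataset schema or metric.

```python
# Sketch of scoring predicate-argument extraction: treat gold and
# predicted (predicate, role, argument) triples as sets and compute
# precision / recall / F1. Labels below are illustrative, not taken
# from the paper's dataset.

def prf1(gold, pred):
    tp = len(gold & pred)                       # exact triple matches
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {("cut", "ARG1", "tissue"), ("cut", "ARG0", "surgeon")}
pred = {("cut", "ARG1", "tissue"), ("cut", "ARG2", "scalpel")}
p, r, f1 = prf1(gold, pred)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.5 0.5 0.5
```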