LIPIcs, Volume 251, ITCS 2023, Complete Volume
Measuring the impact of COVID-19 on hospital care pathways
Hospitals around the world reported significant disruption to care pathways during the COVID-19 pandemic, but measuring the actual impact is difficult. Process mining can help hospital management measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions to hospital care pathways. We found that during the pandemic, both A&E and maternity pathways showed measurable reductions in mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean length of stay or in conformance across the phases of the installation of the hospital's new Command Centre. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
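As an illustration of the kind of conformance and length-of-stay measurement described, here is a minimal sketch using the open-source pm4py library. The log file, date range, and the inductive-miner discovery step are placeholder assumptions for illustration, not the authors' actual pipeline.

```python
# Sketch: measuring conformance of care pathways against a normative model
# discovered from pre-pandemic traces. "pathways.xes" and the date range
# are hypothetical placeholders.
import pm4py

log = pm4py.read_xes("pathways.xes")  # assumed event log of care pathways

# Discover a normative Petri net from pre-pandemic traces (placeholder window).
baseline = pm4py.filter_time_range(log, "2019-01-01 00:00:00",
                                   "2020-02-29 23:59:59",
                                   mode="traces_contained")
net, im, fm = pm4py.discover_petri_net_inductive(baseline)

# Token-based replay reports, among other values, the percentage of traces
# that fit the normative model.
fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
print("conformance diagnostics:", fitness)

# Mean length of stay per case, reported by pm4py in seconds.
durations = pm4py.get_all_case_durations(log)
print("mean length of stay (days):", sum(durations) / len(durations) / 86400)
```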
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Away From Linear Models of Concurrent Programs
Traditional approaches to imperative programming language semantics first define how each individual statement modifies the memory state, and then compose these definitions into a whole program via the interpretation of the sequential composition operator: the humble semicolon. The advent of the multiprocessor and of parallelism began to challenge this model. A program was no longer a single, linear sequence of statements, but contained statements which might occur in one order or another, or even simultaneously. To add to the complexity, compilers and hardware began to optimise their input programs, reordering and removing statements to improve runtime performance. The resulting stack of transformations and complications caused runtime executions to drift progressively further away from the program that a programmer believed they were writing. Several approaches to this problem have appeared: process calculi, which forbid processes from sharing memory and instead force them to communicate directly; sequential consistency, in which an execution must at least appear to respect the ordered sequence of statements; and weak memory ordering, in which an execution must maintain orders involving explicitly synchronised accesses but is free to reorder everything else. Engineers building high-performance code prefer weak memory, due to the relatively high cost of both passing messages and maintaining sequential consistency, but the problem of creating a sound weak memory semantics for a real-world programming language with shared memory concurrency has yet to be fully solved. Here we present a weakly ordered semantics for shared memory concurrency, given as an extension to a previously published model. We show that the existing model can be integrated into reasoning techniques which rely on an operational semantics, and that program transformations which cannot introduce new behaviours can be expressed as a relation over the objects of this semantics. We then add a layer of abstraction to the model which allows us to represent dynamic memory allocation in a weak memory context for the first time.
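The distinction between sequential consistency and weak ordering can be made concrete with a toy interleaving enumerator (invented here, not the paper's semantics) for the classic store-buffering litmus test. Reversing each thread's two statements stands in for a store/load reordering that a compiler or CPU might perform.

```python
# Toy illustration: the store-buffering litmus test. Each thread stores to
# one flag, then loads the other's. Under sequential consistency (only
# interleavings of program order), the outcome r0 == r1 == 0 is impossible;
# once each thread's store may be reordered after its load, it appears.
from itertools import permutations

def outcomes(thread0, thread1):
    """Final (r0, r1) values over all interleavings of two instruction lists."""
    results = set()
    n0, n1 = len(thread0), len(thread1)
    # All ways to merge the two sequences while preserving each thread's order.
    for schedule in set(permutations([0] * n0 + [1] * n1)):
        mem, regs, idx = {"x": 0, "y": 0}, {}, [0, 0]
        for t in schedule:
            op, a, b = (thread0, thread1)[t][idx[t]]
            idx[t] += 1
            if op == "store":
                mem[a] = b
            else:  # "load"
                regs[a] = mem[b]
        results.add((regs["r0"], regs["r1"]))
    return results

t0 = [("store", "x", 1), ("load", "r0", "y")]
t1 = [("store", "y", 1), ("load", "r1", "x")]

print("SC outcomes:", outcomes(t0, t1))            # (0, 0) never occurs
print("reordered:", outcomes(t0[::-1], t1[::-1]))  # (0, 0) now possible
```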
A Mathematical Characterization of Minimally Sufficient Robot Brains
This paper addresses the lower limits of encoding and processing the information acquired through interactions between an internal system (robot algorithms or software) and an external system (robot body and its environment) in terms of action and observation histories. Both are modeled as transition systems. We want to know the weakest internal system that is sufficient for achieving passive (filtering) and active (planning) tasks. We introduce the notion of an information transition system for the internal system, which is a transition system over a space of information states that reflect a robot's or other observer's perspective based on limited sensing, memory, computation, and actuation. An information transition system is viewed as a filter, and a policy or plan is viewed as a function that labels the states of this information transition system. Regardless of whether internal systems are obtained by learning algorithms, planning algorithms, or human insight, we want to know the limits of feasibility for given robot hardware and tasks. We establish, in a general setting, that minimal information transition systems exist up to reasonable equivalence assumptions, and are unique under some general conditions. We then apply the theory to generate new insights into several problems, including optimal sensor fusion/filtering, solving basic planning tasks, and finding minimal representations for modeling a system given input-output relations.
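The minimization idea can be pictured (purely as an invented toy, not the paper's construction) with a partition-refinement quotient in the style of Moore-machine minimization: information states with the same task label and equivalent transitions collapse into one.

```python
# Sketch of quotienting a toy information transition system: states with
# the same label and successors in the same blocks are merged via
# partition refinement. The example ITS below is invented for illustration.

def minimize(states, actions, step, label):
    """Coarsest partition whose blocks share labels and transition behaviour."""
    # Start by grouping states with identical labels (filter/plan output).
    blocks = {}
    for s in states:
        blocks.setdefault(label(s), set()).add(s)
    partition = list(blocks.values())
    changed = True
    while changed:
        changed = False
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for block in partition:
            # Split states whose successors land in different blocks.
            sig = {}
            for s in block:
                key = tuple(block_of(step(s, a)) for a in actions)
                sig.setdefault(key, set()).add(s)
            new_partition.extend(sig.values())
            changed |= len(sig) > 1
        partition = new_partition
    return partition

# Toy ITS: four information states, one sensing action; states 1 and 2
# carry the same label and the same successor, so they merge.
states, actions = [0, 1, 2, 3], ["sense"]
step = lambda s, a: {0: 1, 1: 3, 2: 3, 3: 3}[s]
label = lambda s: "goal" if s == 3 else "search"
print(minimize(states, actions, step, label))  # blocks {0}, {1, 2}, {3}
```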
LIPIcs, Volume 274, ESA 2023, Complete Volume
Caching, crashing & concurrency - verification under adverse conditions
The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency.
The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety.
As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from the reasoning about conceptual correctness arguments.
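A toy example of this separation (invented here, not the thesis's calculus): represent the heap explicitly, state the conceptual correctness of destructive list reversal against an abstraction function, and keep the pointer-level reasoning confined to the implementation.

```python
# The heap is a dict from addresses to (value, next) cells. The abstraction
# function maps a heap and head pointer to the mathematical list they
# represent, so correctness is stated against list(reversed(xs)) rather
# than against pointer manipulations directly.

def abstract(heap, head):
    """Abstraction: the sequence of values reachable from head."""
    xs = []
    while head is not None:
        value, nxt = heap[head]
        xs.append(value)
        head = nxt
    return xs

def reverse_in_place(heap, head):
    """Destructive pointer reversal of a singly linked list."""
    prev = None
    while head is not None:
        value, nxt = heap[head]
        heap[head] = (value, prev)   # redirect the next-pointer
        prev, head = head, nxt
    return prev

heap = {1: ("a", 2), 2: ("b", 3), 3: ("c", None)}   # three heap cells
spec = list(reversed(abstract(heap, 1)))            # conceptual spec
assert abstract(heap, reverse_in_place(heap, 1)) == spec
```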
The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. The criterion gives a natural definition of crash-safety in terms of system histories and matches the behavior of fine-grained caches that use complex synchronization mechanisms to reorder operations.
The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability.
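The history-based crash-safety criterion can be pictured with a toy write-back cache (invented here, not Flashix code): a crash loses the volatile cache, and the surviving persistent state must correspond to a prefix of the flushed history.

```python
# Toy model of crash-safety for a cached store: writes are buffered in a
# volatile write-back cache; flush makes them durable. After a crash, the
# persisted state matches exactly the history of flushed operations.

class CachedStore:
    def __init__(self):
        self.persisted = {}          # survives crashes
        self.cache = {}              # volatile write-back cache
        self.flushed_history = []    # durable prefix of the operation history

    def write(self, key, value):
        self.cache[key] = value      # buffered, not yet durable

    def flush(self):
        for key, value in self.cache.items():
            self.persisted[key] = value
            self.flushed_history.append((key, value))
        self.cache.clear()

    def crash(self):
        self.cache.clear()           # volatile state is lost

    def read(self, key):
        return self.cache.get(key, self.persisted.get(key))

store = CachedStore()
store.write("a", 1)
store.flush()          # "a" is now durable
store.write("b", 2)    # still only in the cache
store.crash()          # cache lost: "b" vanishes, "a" survives
assert store.read("a") == 1 and store.read("b") is None
```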
Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced in this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
Research Paper: Process Mining and Synthetic Health Data: Reflections and Lessons Learnt
Analysing the treatment pathways in real-world health data can provide valuable insight for clinicians and decision-makers. However, the procedures for acquiring real-world data for research can be restrictive and time-consuming, and risk disclosing identifiable information. Synthetic data might enable representative analysis without direct access to sensitive data. In the first part of our paper, we propose an approach for grading synthetic data for process analysis based on its fidelity to relationships found in real-world data. In the second part, we apply our grading approach by assessing cancer patient pathways in a synthetic healthcare dataset (the Simulacrum, provided by the English National Cancer Registration and Analysis Service) using process mining. Visualisations of the patient pathways within the synthetic data appear plausible, showing relationships between events confirmed in the underlying non-synthetic data. The synthetic data also contains data quality issues, reflecting both real-world problems and artefacts of the synthetic dataset's creation. Process mining of synthetic data in healthcare is an emerging field with novel challenges. We conclude that researchers should be aware of the risks when extrapolating results produced from research on synthetic data to real-world scenarios, and should assess findings with analysts who are able to view the underlying data.
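One simple way to grade fidelity in this spirit (a sketch, not the paper's metric) is to compare the directly-follows relations of the real and synthetic event logs; the column names below are placeholders for whatever the datasets actually use.

```python
# Sketch: grade a synthetic event log by the overlap of its
# directly-follows relations with those of the real log.
import pandas as pd

def directly_follows(df, case_col="case_id", act_col="activity", ts_col="timestamp"):
    """Set of (activity, next_activity) pairs observed within cases."""
    df = df.sort_values([case_col, ts_col])
    nxt = df.groupby(case_col)[act_col].shift(-1)
    return {(a, b) for a, b in zip(df[act_col], nxt) if pd.notna(b)}

def fidelity(real_df, synth_df):
    """Jaccard overlap of directly-follows relations: 1.0 means identical."""
    r, s = directly_follows(real_df), directly_follows(synth_df)
    return len(r & s) / len(r | s)

real = pd.DataFrame({
    "case_id": [1, 1, 1],
    "activity": ["refer", "diagnose", "treat"],
    "timestamp": pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03"]),
})
synth = pd.DataFrame({
    "case_id": [9, 9],
    "activity": ["refer", "diagnose"],
    "timestamp": pd.to_datetime(["2021-02-01", "2021-02-02"]),
})
print(fidelity(real, synth))  # 0.5: one of two real relations reproduced
```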