672 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Measuring the impact of COVID-19 on hospital care pathways

    Hospitals around the world reported significant disruption to care pathways during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach to patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
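    A minimal sketch of the kind of conformance measurement described here (assuming the pm4py process-mining library's simplified API; the file name and date ranges are hypothetical placeholders, not the study's data):

```python
# Sketch: discover a normative model from pre-pandemic traces, then measure
# how well pandemic-period pathways conform to it. Assumes pm4py; the event
# log file and date ranges below are hypothetical placeholders.
import pm4py

log = pm4py.read_xes("ae_pathways.xes")  # care-pathway event log (XES format)

# Normative model: mined from pre-pandemic traces only.
baseline = pm4py.filter_time_range(
    log, "2019-01-01 00:00:00", "2020-02-29 23:59:59", mode="traces_contained")
net, im, fm = pm4py.discover_petri_net_inductive(baseline)

# Conformance: replay pandemic-period traces against the normative model and
# inspect the reported fitness metrics (including the share of fitting traces).
pandemic = pm4py.filter_time_range(
    log, "2020-03-01 00:00:00", "2021-03-31 23:59:59", mode="traces_contained")
print(pm4py.fitness_token_based_replay(pandemic, net, im, fm))
```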

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Adaptive Intelligent Systems for Extreme Environments

    As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms have been used in radiation environments to perform routine tasks and reduce radiation risk for human workers. Because of their low price, commercial off-the-shelf devices and components are increasingly popular for making such tasks more affordable. At the same time, this presents new challenges: improving the radiation tolerance, the capability to conduct multiple AI tasks, and the power efficiency of embedded systems in harsh environments. Three pieces of research work are presented in this thesis: 1) a fast simulation method for the analysis of single event effects (SEE) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency. In practical implementations, variations in physical parameters such as the nature of the particle, the linear energy transfer and the circuit characteristics can have a large impact on simulation accuracy, which significantly increases the complexity and cost of transistor-level simulation workflows and makes SEE simulation of large-scale circuits difficult. Therefore, in the first research work, a new SEE simulation scheme is proposed that offers a fast and cost-efficient method to evaluate and compare the performance of large-scale circuits subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours. In high-radiation environments, bit-flips in RAMs not only occur but can also accumulate. However, typical error-mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining error-correcting codes and refreshing techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAMs so that errors cannot accumulate. Furthermore, because the proposed design shares the same ports with the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latency. Implementing intelligent systems with constrained hardware resources remains a challenge. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the "refreshing" concept of the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and experiments show significant improvements in power efficiency for a wide range of scenarios with different workloads. To further improve the performance of the system, the concept of reconfiguration was extended further, resulting in an adaptive deep learning software framework. This framework provides a significant level of adaptability support for various deep learning algorithms on an FPGA-based edge computing platform. To meet the specific accuracy and latency requirements derived from the running applications and operating environments, the platform can dynamically update its hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency compared to a static system.
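    A toy software model of the second work's core idea (a minimal sketch under assumed parameters, not the thesis's hardware design): data words are protected by a single-error-correcting Hamming(7,4) code, random bit-flips arrive over time, and a periodic scrub pass decodes, corrects and rewrites every word so that single-bit errors are cleared before they accumulate into uncorrectable multi-bit errors.

```python
# Toy model of "refresh + error-correcting code": periodic scrubbing keeps
# single-bit upsets from accumulating. Flip rate and scrub interval are
# illustrative, not taken from the thesis.
import random

def hamming74_encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]   # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword positions 1..7: p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(code):
    # Recompute parities; the syndrome is the 1-indexed position of a single flip.
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        code[syndrome - 1] ^= 1
    return code

memory = [hamming74_encode(random.randrange(16)) for _ in range(1024)]
golden = [w[:] for w in memory]                  # reference copy

for cycle in range(10_000):
    # Radiation model: occasionally flip one random bit somewhere in memory.
    if random.random() < 0.05:
        w = random.randrange(len(memory))
        memory[w][random.randrange(7)] ^= 1
    # Scrub pass: every 100 cycles, decode-correct-rewrite the whole memory.
    if cycle % 100 == 0:
        for w in range(len(memory)):
            memory[w] = hamming74_correct(memory[w])

memory = [hamming74_correct(w) for w in memory]  # final correction pass
print("uncorrectable words:", sum(m != g for m, g in zip(memory, golden)))
```

    Without the scrub loop, flips pile up until two land in the same word and the code can no longer correct them; the refresh interval trades scrubbing overhead against the tolerated flip rate.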

    Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization

    Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice of finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, with a special focus on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well studied in linear programming, integer semidefinite programs (ISDPs) have been considered only recently. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the Chvátal-Gomory closure, total dual integrality and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among them the quadratic cycle cover problem, the quadratic traveling salesman problem and the graph partition problem. Our approaches lead to novel, strong and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
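    As a concrete illustration of an ISDP (a standard max-cut formulation from this literature, not necessarily the dissertation's exact model): a symmetric matrix with unit diagonal and ±1 entries is positive semidefinite exactly when it has the rank-one form X = xx^T with x ∈ {−1,1}^n, so max-cut on a graph with Laplacian L can be written as

```latex
% Max-cut as an integer SDP (standard formulation; L is the graph Laplacian).
\begin{aligned}
\max_{X \in \mathbb{S}^n}\quad & \tfrac{1}{4}\,\langle L, X \rangle \\
\text{s.t.}\quad & \operatorname{diag}(X) = \mathbf{1}, \qquad X \succeq 0, \\
& X_{ij} \in \{-1, 1\} \quad \text{for all } i, j.
\end{aligned}
```

    Dropping the integrality constraints recovers the classical semidefinite relaxation of max-cut; cutting planes then tighten such relaxations toward the integer optimum.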

    Design of General Purpose Minimal-Auxiliary Ising Machines

    Ising machines are a form of quantum-inspired processing-in-memory computer which has shown great promise for overcoming the limitations of traditional computing paradigms while operating at a fraction of the energy use. The process of designing Ising machines is known as the reverse Ising problem. Unfortunately, this problem is in general computationally intractable: it is a nonconvex mixed-integer linear programming problem which cannot be naively brute-forced except in the simplest cases, due to the exponential scaling of runtime with the number of spins. We prove new theoretical results which allow us to reduce the search space to one with quadratic scaling. We utilize this theory to develop general-purpose algorithmic solutions to the reverse Ising problem. In particular, we demonstrate Ising formulations of 3-bit and 4-bit integer multiplication which use fewer total spins than previously known methods by a factor of more than three. Our results increase the practicality of implementing such circuits on modern Ising hardware, where spins are at a premium.
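    The design problem can be made concrete with a tiny, well-known gadget (illustrative only; the paper's multiplication circuits are far larger): a brute-force check that a classic three-variable penalty function has exactly the AND gate's truth table as its set of ground states. The reverse Ising problem asks for such couplings at scale, where this kind of exhaustive search over 2^(#spins) configurations becomes intractable.

```python
# Brute-force check (illustrative, not from the paper) that a classic QUBO
# penalty has the AND gate's truth table as its exact set of ground states.
from itertools import product

def energy(x, y, z):
    # Well-known AND-gate penalty: minimized (E = 0) exactly when z = x AND y.
    return x * y - 2 * (x + y) * z + 3 * z

states = list(product((0, 1), repeat=3))
e_min = min(energy(*s) for s in states)
ground = {s for s in states if energy(*s) == e_min}

# The minimizers are exactly the four assignments consistent with z = x AND y.
assert ground == {(x, y, x & y) for x, y in product((0, 1), repeat=2)}
print("ground states:", sorted(ground))
```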

    50 Years of quantum chromodynamics – Introduction and Review


    Unprovability of strong complexity lower bounds in bounded arithmetic

    While there has been progress in establishing the unprovability of complexity statements in lower fragments of bounded arithmetic, understanding the limits of Jeřábek's theory APC_1 [Jeř07a] and of higher levels of Buss's hierarchy S^i_2 [Bus86] has been a more elusive task. Even in the more restricted setting of Cook's theory PV [Coo75], known results often rely on a less natural formalization that encodes a complexity statement using a collection of sentences instead of a single sentence. This is done to reduce the quantifier complexity of the resulting sentences so that standard witnessing results can be invoked. In this work, we establish unprovability results for stronger theories and for sentences of higher quantifier complexity. In particular, we unconditionally show that APC_1 cannot prove strong complexity lower bounds separating the third level of the polynomial hierarchy. In more detail, we consider non-uniform average-case separations, and establish that APC_1 cannot prove a sentence stating that ∀n ≥ n_0 ∃ f_n ∈ Π_3-SIZE[n^d] that is (1/n)-far from every Σ_3-SIZE[2^{n^δ}] circuit. This is a consequence of a much more general result showing that, for every i ≥ 1, strong separations for Π_i-SIZE[poly(n)] versus Σ_i-SIZE[2^{n^{Ω(1)}}] cannot be proved in the theory T^i_PV consisting of all true ∀Σ^b_{i−1}-sentences in the language of Cook's theory PV. Our argument employs a convenient game-theoretic witnessing result that can be applied to sentences of arbitrary quantifier complexity. We combine it with extensions of a technique introduced by Krajíček [Kra11] that was recently employed by Pich and Santhanam [PS21] to establish the unprovability of lower bounds in PV (i.e., the case i = 1 above, but under a weaker formalization) and in a fragment of APC_1.
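    Rendered in display form, the lower-bound sentence discussed above (our paraphrase; d and δ are fixed constants, and "(1/n)-far" means disagreement on at least a 1/n fraction of inputs) reads:

```latex
% Paraphrase of the unprovable lower-bound sentence (d, delta fixed constants).
\forall n \ge n_0 \;\; \exists f_n \in \Pi_3\text{-}\mathrm{SIZE}[n^d] \;\;
\forall C \in \Sigma_3\text{-}\mathrm{SIZE}\bigl[2^{n^{\delta}}\bigr] :
\quad \Pr_{x \in \{0,1\}^n}\bigl[f_n(x) \neq C(x)\bigr] \;\ge\; \tfrac{1}{n}.
```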

    Quantum simulation of confinement dynamics

    Quantum computers have generated much excitement in recent years due to their potential to outperform classical computers on many difficult problems. While a fully fault-tolerant quantum device is yet to be built, there has been much work pushing for noisy intermediate-scale quantum (NISQ) devices to achieve quantum advantage. One of the most promising fields in which to accomplish this is the quantum simulation of quantum many-body systems. A classical computer is able to simulate a general quantum system but suffers memory requirements exponential in system size. Thus, for exact results, classical computers are limited to simulating just tens of particles, whereas realistic quantum systems comprise of the order of 10^23. Quantum computers reduce this memory cost to polynomial growth, making them key to understanding the physics of many-body systems. One area that is notably difficult to simulate is confinement physics. Confinement is the phenomenon in which the energy of two particles grows indefinitely with their separation, most prominently found between quarks in quantum chromodynamics (QCD). In this work we consider the application of quantum devices to simulating such phenomena. In particular, we consider simple condensed matter systems, namely variations of the Ising model, that exhibit confinement physics. In the first half of this work we perform an analytical and numerical study of confinement, and develop a Trotterization protocol to enable the quantum simulation of such physics on a digital quantum computer. We present results obtained directly on an IBM quantum computer showing the non-equilibrium effects of confinement in such systems. To achieve these results we developed state-of-the-art error-mitigation methods to combat the large errors inherent in current NISQ devices. In the latter half, we propose physical phenomena that may act as benchmarks for quantum devices in the future. We consider collisions of mesons (bound states of two particles) with impurities, in which a long-lived metastable state is found to form. Such collisions have the potential to be simulated on digital quantum computers in the near future. We then consider collisions of mesons in systems with long-range interactions. We show how collisions of interacting mesons can lead to the formation of hadrons (bound states of many constituent particles) in a fusion-type event. While these proposals are beyond current digital quantum computer capabilities, analogue quantum simulation devices such as trapped-ion setups or Rydberg atom experiments are well suited to realising this physics.
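    A minimal sketch of such a Trotterization protocol (assuming Qiskit; the couplings, chain length and step count are illustrative placeholders, not the thesis's values) for the mixed-field Ising chain H = -J Σ Z_i Z_{i+1} - h_x Σ X_i - h_z Σ Z_i, where the longitudinal field h_z confines pairs of domain walls:

```python
# First-order Trotterization sketch for the mixed-field Ising chain.
# Assumes Qiskit; parameters below are illustrative placeholders.
from qiskit import QuantumCircuit

def trotter_circuit(n_qubits, n_steps, dt, J=1.0, hx=0.5, hz=0.25):
    qc = QuantumCircuit(n_qubits)
    for _ in range(n_steps):
        # exp(+i J dt Z_i Z_{i+1}) on each bond: RZZ(theta) = exp(-i theta/2 ZZ).
        for i in range(n_qubits - 1):
            qc.rzz(-2 * J * dt, i, i + 1)
        # exp(+i h_x dt X_i) and exp(+i h_z dt Z_i) on each site.
        for i in range(n_qubits):
            qc.rx(-2 * hx * dt, i)
            qc.rz(-2 * hz * dt, i)
    return qc

qc = trotter_circuit(n_qubits=8, n_steps=20, dt=0.1)
qc.measure_all()
```

    More Trotter steps reduce the discretization error but deepen the circuit and so amplify hardware noise, which is exactly where the error-mitigation methods mentioned above come in.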

    Transformers Learn Shortcuts to Automata

    Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are learned by these shallow and non-recurrent models? We find that a low-depth Transformer can represent the computations of any finite-state automaton (thus, any bounded-memory algorithm), by hierarchically reparameterizing its recurrent dynamics. Our theoretical results characterize shortcut solutions, whereby a Transformer with o(T) layers can exactly replicate the computation of an automaton on an input sequence of length T. We find that polynomial-size O(log T)-depth solutions always exist; furthermore, O(1)-depth simulators are surprisingly common, and can be understood using tools from Krohn-Rhodes theory and circuit complexity. Empirically, we perform synthetic experiments by training Transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. We further investigate the brittleness of these solutions and propose potential mitigations.
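    The key structural fact behind these shortcuts can be illustrated in a few lines of code (a sketch of the idea, not the paper's construction or experiments): automaton transition functions compose associatively, so the sequential product of T per-symbol transition maps collapses in O(log T) rounds of pairwise composition, which is what a log-depth non-recurrent model can exploit.

```python
# Shortcut idea in miniature: compose per-symbol transition maps of a
# finite-state automaton in O(log T) pairwise rounds instead of T steps.
# Example automaton: parity of the number of 1s, with states {0, 1}.
def step(state, symbol):
    return state ^ symbol

def transition_map(symbol, n_states=2):
    # Represent the symbol's action as a tuple: state -> next state.
    return tuple(step(s, symbol) for s in range(n_states))

def compose(f, g):
    # Apply f's map first, then g's.
    return tuple(g[f[s]] for s in range(len(f)))

def run_shortcut(symbols, start=0):
    maps = [transition_map(a) for a in symbols]
    while len(maps) > 1:  # O(log T) rounds of pairwise composition
        maps = [compose(maps[i], maps[i + 1]) if i + 1 < len(maps) else maps[i]
                for i in range(0, len(maps), 2)]
    return maps[0][start]

word = [1, 0, 1, 1, 0, 1]
assert run_shortcut(word) == sum(word) % 2  # agrees with sequential execution
print(run_shortcut(word))
```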