
    Percentile Queries in Multi-Dimensional Markov Decision Processes

    Markov decision processes (MDPs) with multi-dimensional weights are useful to analyze systems with multiple objectives that may be conflicting and require the analysis of trade-offs. We study the complexity of percentile queries in such MDPs and give algorithms to synthesize strategies that enforce such constraints. Given a multi-dimensional weighted MDP, a quantitative payoff function $f$, value thresholds $v_i$ (one per dimension), and probability thresholds $\alpha_i$, we show how to compute a single strategy enforcing that, for every dimension $i$, the probability of outcomes $\rho$ satisfying $f_i(\rho) \geq v_i$ is at least $\alpha_i$. We consider classical quantitative payoffs from the literature (sup, inf, lim sup, lim inf, mean-payoff, truncated sum, discounted sum). Our work extends to the quantitative case the multi-objective model checking problem studied by Etessami et al. in unweighted MDPs. (Extended version of a CAV 2015 paper.)
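
    As a rough illustration (not from the paper), the sketch below checks a percentile query empirically for one fixed memoryless strategy by Monte Carlo simulation, using truncated sum as the payoff; the toy MDP, strategy, thresholds, and function names are assumed for illustration only.

```python
import random

# Hypothetical multi-dimensional weighted MDP: state -> action -> list of
# (probability, next_state, weight_vector). All values are illustrative.
MDP = {
    "s0": {"a": [(0.6, "s1", (2, 0)), (0.4, "s2", (0, 3))]},
    "s1": {"a": [(1.0, "goal", (1, 1))]},
    "s2": {"a": [(1.0, "goal", (0, 2))]},
    "goal": {},
}
STRATEGY = {"s0": "a", "s1": "a", "s2": "a"}  # fixed memoryless strategy

def run(start="s0"):
    """Simulate one outcome; payoff = truncated sum of weights until 'goal'."""
    state, totals = start, [0, 0]
    while MDP[state]:
        r, acc = random.random(), 0.0
        for p, nxt, w in MDP[state][STRATEGY[state]]:
            acc += p
            if r <= acc:
                totals = [t + wi for t, wi in zip(totals, w)]
                state = nxt
                break
    return totals

def satisfies_percentiles(v, alpha, n=10_000):
    """Estimate whether P[f_i(rho) >= v_i] >= alpha_i holds in every dimension i."""
    hits = [0] * len(v)
    for _ in range(n):
        payoff = run()
        for i, (f_i, v_i) in enumerate(zip(payoff, v)):
            if f_i >= v_i:
                hits[i] += 1
    return all(h / n >= a_i for h, a_i in zip(hits, alpha))

print(satisfies_percentiles(v=[2, 2], alpha=[0.5, 0.3]))
```

    The paper itself synthesizes strategies that enforce such constraints; the sketch above only estimates whether one given strategy satisfies them.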

    Fluidization of Petri nets to improve the analysis of Discrete Event Systems

    Petri nets (PNs) are a widely accepted formalism for the modeling and analysis of Discrete Event Systems (DES), such as manufacturing, logistics, and traffic systems, computer networks, web services, communication networks, and biochemical processes. Like other formalisms, Petri nets suffer from the "state explosion" problem, in which the number of states grows explosively with the system load, making analysis techniques based on state enumeration intractable. Fluidization of Petri nets tries to overcome this problem by moving from discrete PNs (in which transition firings and place markings are non-negative integer quantities) to continuous PNs (in which transition firings, and therefore markings, take real values). Continuous PNs admit more efficient analysis techniques than discrete ones. However, as with any relaxation, fluidization comes at the cost of fidelity, potentially losing qualitative or quantitative properties of the original Petri net. The main goal of this thesis is to improve the fluidization process of PNs, obtaining a continuous (or at least partially continuous) formalism that avoids the state explosion problem while adequately approximating the discrete PN. In addition, this thesis considers not only the fluidization process but also the continuous PN formalism itself, studying the computational complexity of checking some of its properties. First, the differences that arise between discrete and continuous PNs are established, and transformations on the discrete net are proposed that improve the resulting continuous net. Second, the fluidization of autonomous PNs (i.e., without any timing interpretation) is examined, and conditions are established under which the continuous PN preserves certain qualitative properties of the discrete PN: boundedness, deadlock-freeness, liveness, etc. Third, the thesis contributes to the study of the decidability and computational complexity of some common properties of autonomous continuous PNs. Fourth, the fluidization of timed PNs is considered, and techniques are proposed so that timed continuous PNs preserve certain quantitative properties of discrete stochastic PNs. Finally, a new formalism is proposed in which transition firing adapts to the system load, combining discrete and continuous firings and giving rise to adaptive hybrid Petri nets. Adaptive hybrid PNs provide a conceptual framework for the partial or total fluidization of Petri nets that encompasses discrete, continuous, and hybrid Petri nets. In general, it allows properties of the original PN to be preserved while avoiding the state explosion problem.
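
    As an illustration of the relaxation described above (my own sketch, not taken from the thesis), the following contrasts discrete firing, where markings stay non-negative integers, with fluid firing, where a transition may fire by any real amount up to its enabling degree; the net fragment, arc weights, and markings are assumed.

```python
# Hypothetical Petri net fragment: pre[t] maps each input place of transition t
# to its arc weight. Markings map places to token counts (integers in the
# discrete net, reals in the fluid relaxation). Values are illustrative.
pre = {"t1": {"p1": 2, "p2": 1}}

def fire_discrete(marking, t):
    """Fire t once if it is enabled; tokens remain non-negative integers."""
    if all(marking[p] >= w for p, w in pre[t].items()):
        for p, w in pre[t].items():
            marking[p] -= w
    return marking

def enabling_degree(marking, t):
    """Continuous enabling degree: min over input places of marking/arc weight."""
    return min(marking[p] / w for p, w in pre[t].items())

def fire_continuous(marking, t, amount):
    """Fire t by any real amount up to its enabling degree (fluid firing)."""
    amount = min(amount, enabling_degree(marking, t))
    for p, w in pre[t].items():
        marking[p] -= amount * w
    return marking

print(fire_discrete({"p1": 5, "p2": 1}, "t1"))             # {'p1': 3, 'p2': 0}
print(fire_continuous({"p1": 5.0, "p2": 1.0}, "t1", 0.4))  # {'p1': 4.2, 'p2': 0.6}
```

    Because the fluid marking evolves over the reals, analysis no longer relies on enumerating a discrete state space, which is the source of the efficiency gain mentioned above.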

    The Knowledge Grid: A Platform to Increase the Interoperability of Computable Knowledge and Produce Advice for Health

    Here we demonstrate how more highly interoperable computable knowledge enables systems to generate large quantities of evidence-based advice for health. We first provide a thorough analysis of advice. Then, because advice derives from knowledge, we turn our focus to computable, i.e., machine-interpretable, forms of knowledge. We consider how computable knowledge plays dual roles as a resource conveying content and as an advice enabler. In this latter role, computable knowledge is combined with data about a decision situation to generate advice targeted at the pending decision. We distinguish between two types of automated services. When a computer system provides computable knowledge, we say that it provides a knowledge service. When a computer system combines computable knowledge with instance data to provide advice that is specific to an unmade decision, we say that it provides an advice-giving service. The work here aims to increase the interoperability of computable knowledge to bring about better knowledge services and advice-giving services for health. The primary motivation for this research is the problem of missing or inadequate advice about health topics. The global demand for well-informed health advice far exceeds the global supply. In part to overcome this scarcity, the design and development of Learning Health Systems is being pursued at various levels of scale: local, regional, state, national, and international. Learning Health Systems fuse capabilities to generate new computable biomedical knowledge with other capabilities to rapidly and widely use computable biomedical knowledge to inform health practices and behaviors with advice. To support Learning Health Systems, we believe that knowledge services and advice-giving services have to be more highly interoperable. I use examples of knowledge services and advice-giving services that exclusively support medication use, because I am a pharmacist and pharmacy is the biomedical domain that I know. The examples here address the serious problems of medication adherence and prescribing safety. Two empirical studies are shared that demonstrate the potential to address these problems and make improvements by using advice. But primarily we use these examples to demonstrate general and critical differences between stand-alone, unique approaches to handling computable biomedical knowledge, which make it useful for one system, and common, more highly interoperable approaches, which can make it useful for many heterogeneous systems. Three aspects of computable knowledge interoperability are addressed: modularity, identity, and updateability. We demonstrate that instances of computable knowledge, and related instances of knowledge services and advice-giving services, can be modularized. We also demonstrate the utility of uniquely identifying modular instances of computable knowledge. Finally, we build on the computing concept of pipelining to demonstrate how computable knowledge modules can automatically be updated and rapidly deployed. Our work is supported by a fledgling technical knowledge infrastructure platform called the Knowledge Grid. It includes formally specified compound digital objects called Knowledge Objects, a conventional digital Library that serves as a Knowledge Object repository, and an Activator that provides an application programming interface (API) for computable knowledge. The Library component provides knowledge services. The Activator component provides both knowledge services and advice-giving services. In conclusion, by increasing the interoperability of computable biomedical knowledge using the Knowledge Grid, we demonstrate new capabilities to generate well-informed health advice at scale. These new capabilities may ultimately support Learning Health Systems and boost health for large populations of people who would otherwise not receive well-informed health advice. Ph.D. dissertation (Information), University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/146073/1/ajflynn_1.pd
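
    To make the distinction between the two service types concrete, here is a minimal, hypothetical sketch: a knowledge service returns a uniquely identified computable knowledge module, while an advice-giving service combines that module with instance data to produce decision-specific advice. The identifier, dosing rule, and function names are invented for illustration and do not represent the actual Knowledge Grid or Activator API.

```python
# Toy store of computable knowledge modules, each with a unique identifier.
# The identifier scheme and the dosing rule below are purely illustrative.
KNOWLEDGE_OBJECTS = {
    "example-org/max-dose-check/v2": {
        "drug": "examplemycin",
        "max_daily_dose_mg": 400,
    },
}

def knowledge_service(knowledge_id):
    """Return the computable knowledge module itself (knowledge as a resource)."""
    return KNOWLEDGE_OBJECTS[knowledge_id]

def advice_giving_service(knowledge_id, patient_data):
    """Combine a knowledge module with instance data to produce specific advice."""
    rule = knowledge_service(knowledge_id)
    dose = patient_data["prescribed_daily_dose_mg"]
    if patient_data["drug"] == rule["drug"] and dose > rule["max_daily_dose_mg"]:
        return f"Reduce dose: {dose} mg/day exceeds {rule['max_daily_dose_mg']} mg/day."
    return "No dosing problem detected by this rule."

print(advice_giving_service(
    "example-org/max-dose-check/v2",
    {"drug": "examplemycin", "prescribed_daily_dose_mg": 600},
))
```

    The versioned identifier ("/v2") hints at the updateability aspect: a deployment pipeline could swap the module for a newer version without changing the services that call it.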

    Robust Output Regulation: Optimization-Based Synthesis and Event-Triggered Implementation

    We investigate the problem of practical output regulation: design a controller that brings the system output into the vicinity of a desired target value while keeping the other variables bounded. We consider uncertain systems that are possibly nonlinear, where the uncertainty of the linear part is modeled element-wise through a parametric family of matrix boxes. An optimization-based design procedure is proposed that delivers a continuous-time controller and estimates the maximal regulation error. We also analyze an event-triggered emulation of this controller, which can be implemented on a digital platform, along with explicit estimates of the regulation error.
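
    A minimal sketch of the event-triggered idea, not the paper's synthesis procedure: the control input is recomputed only when the measured error has drifted from its last sampled value by more than a threshold, so the output settles in a vicinity of the target whose size is tied to that threshold. The scalar plant, gain, and threshold below are assumed for illustration.

```python
# Illustrative event-triggered emulation of a regulator: the control input is
# updated only when the output error drifts from its last sampled value by more
# than a threshold. Plant, gain, and threshold are assumed, not from the paper.

def simulate(steps=4000, dt=0.005, target=1.0, k=2.0, threshold=0.05):
    x = 0.0                  # plant state, dx/dt = -x + u
    u = 0.0                  # control input, held constant between events
    sampled_error = None     # error value at the last triggering event
    events = 0
    for _ in range(steps):
        error = target - x
        # Event-triggering rule: resample only on sufficiently large drift.
        if sampled_error is None or abs(error - sampled_error) > threshold:
            sampled_error = error
            u = target + k * sampled_error   # feedforward + proportional term
            events += 1
        x += dt * (-x + u)   # forward-Euler step of the plant dynamics
    return x, events

x_final, events = simulate()
print(f"final output ≈ {x_final:.3f} (target 1.0), controller updates: {events}")
```

    The output converges to a neighborhood of the target rather than the exact value; tightening the triggering threshold shrinks that neighborhood at the cost of more frequent controller updates.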

    Towards precision medicine in kidney transplantation: Epitope based HLA-matching and improved DSA diagnostics

    Refinement of immunological pre-transplant risk assessment, specifically by improving the molecular compatibility between donor and recipient and by enhancing the characterization of the recipient’s pre-established immunological memory, is key to the transition to improved organ allocation and personalized immunosuppression in solid organ transplantation (SOT). For both of these diagnostic refinements, novel conceptual and technological achievements have been attained, but they still need to be perfected. This doctoral thesis is dedicated to both topics. Publication 1 presents a study on the immunogenicity of human leukocyte antigen (HLA) epitopes, the results and insights of which contribute to better prediction of newly formed (de novo) immune responses in allograft recipients. The study presented in Publication 2 addresses the question of which pre-transplant donor-specific HLA antibodies (DSA) are clinically relevant and why. The results demonstrate how different antibody-IgG compositions affect complement activation, the most detrimental antibody effector function in relation to antibody-mediated rejection (AMR). Finally, Publication 3 reviews relevant issues in the context of DSA characterization by the Single Antigen Bead (SAB) assay, which has become the central assay for HLA-antibody characterization in transplant diagnostics.

    Multisensor Data Fusion in Pervasive Artificial Intelligence Systems

    Intelligent systems designed to manage smart environments exploit numerous sensing and actuating devices, pervasively deployed so as to remain invisible to users and subtly learn their preferences and satisfy their needs. Nowadays, such systems are constantly evolving and becoming ever more complex, so it is increasingly difficult to develop them successfully. A possible solution to this problem might lie in delegating certain decisions to the machines themselves, making them more autonomous and able to self-configure and self-manage. This work presents a multi-tier architecture for a complete pervasive system capable of understanding the state of the surrounding environment, as well as using this knowledge to decide what actions should be performed to provide the best possible environmental conditions for end-users, in line with the Ambient Intelligence (AmI) paradigm. To achieve such high-level goals, the system has to effectively merge and analyze heterogeneous data collected by multiple sensors, pervasively deployed in a smart environment. To this end, the proposed system includes a context-aware, self-optimizing, adaptive module for sensor data fusion. Contextual information is leveraged in the fusion process, so as to increase the accuracy of inference and hence decision making in a dynamically changing environment. Additionally, two self-optimization modules are responsible for dynamically determining the subset of sensors to use, finding an optimal trade-off to minimize energy consumption and maximize sensing accuracy. The effectiveness of the proposed approach is demonstrated with the application scenario of user activity recognition in an AmI system managing a smart home environment. In order to increase the resilience of the system to highly uncertain and unreliable information, the architecture is enriched by a filtering module to pre-process raw data coming from lower levels, before feeding them to the data fusion and reasoning modules in the higher levels.
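
    As a rough sketch of the energy/accuracy trade-off in sensor subset selection (not the system's actual self-optimization modules), the following exhaustively scores subsets of a small sensor pool with a weighted objective; the sensor costs, accuracies, and the naive independence-based fusion model are assumptions.

```python
from itertools import combinations

# Hypothetical sensor pool: per-sensor energy cost and standalone accuracy for
# the current activity-recognition context. Values are assumed for illustration.
SENSORS = {
    "accelerometer": {"energy": 1.0, "accuracy": 0.70},
    "microphone":    {"energy": 3.0, "accuracy": 0.80},
    "camera":        {"energy": 9.0, "accuracy": 0.95},
    "door_switch":   {"energy": 0.2, "accuracy": 0.55},
}

def fused_accuracy(subset):
    """Naive fusion model: combine per-sensor error rates as if independent."""
    error = 1.0
    for s in subset:
        error *= 1.0 - SENSORS[s]["accuracy"]
    return 1.0 - error

def select_sensors(energy_weight=0.05):
    """Pick the subset maximizing fused accuracy minus weighted energy cost."""
    best_subset, best_score = (), float("-inf")
    for r in range(1, len(SENSORS) + 1):
        for subset in combinations(SENSORS, r):
            energy = sum(SENSORS[s]["energy"] for s in subset)
            score = fused_accuracy(subset) - energy_weight * energy
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score

print(select_sensors())
```

    A context-aware version of this selection would be re-run whenever the context changes, for example with accuracy estimates that depend on the activity currently being recognized.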