
    Decision analysis under uncertainty for sustainable development

    Get PDF
    Policy-making for sustainable development becomes more efficient when it is reliably backed by evidence-based decision analysis. This is particularly crucial in the planning of public service delivery. By translating "raw" data into information, decision analysis illuminates our judgment and, ultimately, the policies we adopt. In the context of public service provision, decision analysis can support the prioritization of policy options and the monitoring of progress. However, most models are deterministic, that is, they do not consider the uncertainty in their evidence. These "incomplete" models, through their impact on policy decisions, can ultimately lead to an inefficient use of resources. The main barriers to a wider incorporation of uncertainty are: (i) the complexity of the approaches currently available, and (ii) the need to develop methods tailored to the specific decision problems faced in public service delivery. To overcome these limitations, this thesis aims to facilitate the incorporation of uncertainty in the evidence into decision analysis for sustainable development. We propose two methods. First, a non-compensatory multi-criteria prioritization model under uncertainty. Given multiple criteria and uncertain evidence, the model identifies the best policy option to improve service provision for sustainable development. The non-compensatory nature of our model makes it an attractive alternative to the widely used composite-index approach. Second, a compositional trend analysis model under uncertainty to monitor service coverage. By accounting for the non-negativity and constant-sum constraints of the data, our model provides better estimates for measuring progress than standard statistical approaches. Both methods are validated in real case studies in the energy, water and health sectors. We apply our prioritization model to strategic renewable energy planning and to the targeting of water, sanitation and hygiene services, and we apply our trend analysis model to the global monitoring of water and sanitation coverage and of child mortality. Our results emphasize the importance of incorporating the uncertainty in the evidence into decision analysis, particularly into prioritization and monitoring processes, both central to sustainable development practice.
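    As a minimal sketch of the first idea (non-compensatory prioritization under uncertain evidence), the following Python snippet applies an ELECTRE-like concordance/veto rule inside a Monte Carlo loop. The options, criteria, weights, thresholds and score distributions are hypothetical and do not come from the thesis; they only illustrate how a non-compensatory rule can be combined with sampled evidence.

        # Illustrative sketch only: Monte Carlo, outranking-style (non-compensatory)
        # prioritization under uncertain criterion scores. Options, criteria,
        # weights, thresholds and distributions are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)

        options = ["grid extension", "solar mini-grid", "diesel generators"]
        # Mean score and standard deviation per criterion (rows) and option (columns).
        means = np.array([[0.7, 0.4, 0.6],    # cost-effectiveness
                          [0.5, 0.8, 0.3],    # environmental impact
                          [0.6, 0.7, 0.5]])   # service coverage
        sds = 0.1 * np.ones_like(means)

        weights = np.array([0.4, 0.3, 0.3])   # criterion weights
        majority = 0.6                        # concordance threshold
        veto = 0.4                            # veto threshold (non-compensatory)

        n_draws = 10_000
        wins = np.zeros(len(options))

        for _ in range(n_draws):
            scores = rng.normal(means, sds)   # one draw of the uncertain evidence
            for a in range(len(options)):
                outranks_all = True
                for b in range(len(options)):
                    if a == b:
                        continue
                    concordance = weights[scores[:, a] >= scores[:, b]].sum()
                    vetoed = np.any(scores[:, b] - scores[:, a] > veto)
                    if concordance < majority or vetoed:
                        outranks_all = False
                        break
                if outranks_all:
                    wins[a] += 1

        for name, w in zip(options, wins):
            print(f"{name}: outranks all alternatives in {w / n_draws:.1%} of draws")

    Because a large deficit on any single criterion triggers the veto, a strong score elsewhere cannot compensate for it, which is the defining feature of a non-compensatory rule.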

    On the theory of composition in physics

    Full text link
    We develop a theory for describing composite objects in physics. These can be static objects, such as tables, or things that happen in spacetime (such as a region of spacetime with fields on it regarded as being composed of smaller such regions joined together). We propose certain fundamental axioms which, it seems, should be satisfied in any theory of composition. A key axiom is the order independence axiom, which says we can describe the composition of a composite object in any order. Then we provide a notation for describing composite objects that naturally leads to these axioms being satisfied. In any given physical context we are interested in the value of certain properties for the objects (such as whether the object is possible, what probability it has, how wide it is, and so on). We associate a generalized state with an object. This can be used to calculate the value of those properties we are interested in for this object. We then propose a certain principle, the composition principle, which says that we can determine the generalized state of a composite object from the generalized states for the components by means of a calculation having the same structure as the description of the generalized state. The composition principle provides a link between description and prediction. Comment: 23 pages. To appear in a festschrift for Samson Abramsky edited by Bob Coecke, Luke Ong, and Prakash Panangaden
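    As a deliberately reduced toy (not the paper's formalism), one can collapse a "generalized state" to a single number per component and composition to multiplication, just to show how the composite's state is obtained by a calculation mirroring the description, independently of the order in which components are listed:

        # Toy illustration only: generalized states reduced to one probability per
        # component, composition reduced to multiplication. Names are invented.
        from functools import reduce

        component_states = {"leg": 0.9, "top": 0.8, "screw": 0.99}

        def compose(description):
            """Generalized state of a composite described as a list of components."""
            return reduce(lambda acc, part: acc * component_states[part], description, 1.0)

        # Order independence: describing the composition in any order gives the same value.
        print(compose(["leg", "top", "screw"]))   # 0.7128
        print(compose(["screw", "leg", "top"]))   # same value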

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications

    Get PDF

    Artificial intelligence in steam cracking modeling : a deep learning algorithm for detailed effluent prediction

    Get PDF
    Chemical processes can benefit tremendously from fast and accurate effluent composition prediction for plant design, control, and optimization. The Industry 4.0 revolution claims that by introducing machine learning into these fields, substantial economic and environmental gains can be achieved. The bottleneck for high-frequency optimization and process control is often the time necessary to perform the required detailed analyses of, for example, feed and product. To resolve these issues, a framework of four deep learning artificial neural networks (DL ANNs) has been developed for the largest chemical production process, steam cracking. The proposed methodology allows both a detailed characterization of a naphtha feedstock and a detailed composition of the steam cracker effluent to be determined, based on a limited number of commercial naphtha indices and rapidly accessible process characteristics. The detailed characterization of a naphtha is predicted from three points on the boiling curve and a paraffins, iso-paraffins, olefins, naphthenes, and aromatics (PIONA) characterization. If unavailable, the boiling points are also estimated. Even with estimated boiling points, the developed DL ANN outperforms several established methods such as maximization of Shannon entropy and traditional ANNs. For feedstock reconstruction, a mean absolute error (MAE) of 0.3 wt% is achieved on the test set, while the MAE of the effluent prediction is 0.1 wt%. When all networks are combined, using the output of the previous network as input to the next, the effluent MAE increases to 0.19 wt%. In addition to the high accuracy of the networks, a major benefit is the negligible computational cost required to obtain the predictions. On a standard Intel i7 processor, predictions are made in the order of milliseconds. Commercial software such as COILSIM1D performs slightly better in terms of accuracy, but the required central processing unit time per reaction is in the order of seconds. This tremendous speed-up and minimal accuracy loss make the presented framework highly suitable for the continuous monitoring of difficult-to-access process parameters and for the envisioned high-frequency real-time optimization (RTO) strategy or process control. Nevertheless, the lack of a fundamental basis implies that fundamental understanding is almost completely lost, which is not always well-accepted by the engineering community. In addition, the performance of the developed networks drops significantly for naphthas that are highly dissimilar to those in the training set.
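    A minimal sketch of the general idea, a feed-forward network mapping cheap naphtha indices (PIONA fractions plus a few boiling points) to a handful of effluent yields, is given below. The feature set, the synthetic data, and the network size are all assumptions for illustration; the paper's framework consists of four much larger deep networks trained on detailed simulation data.

        # Illustrative sketch only: small MLP from hypothetical naphtha indices to
        # hypothetical effluent yields, trained on synthetic data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(42)

        # Synthetic inputs: 5 PIONA wt% values + 3 boiling-curve points (degrees C).
        X = rng.uniform([20, 20, 0, 10, 5, 30, 90, 160],
                        [45, 40, 5, 30, 15, 50, 120, 200], size=(2000, 8))
        # Stand-in targets: 4 effluent yields (e.g. ethylene, propylene, methane, benzene).
        W = rng.normal(size=(8, 4))
        y = X @ W * 0.01 + rng.normal(scale=0.1, size=(2000, 4))

        scaler = StandardScaler().fit(X)
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
        model.fit(scaler.transform(X), y)

        mae = np.mean(np.abs(model.predict(scaler.transform(X)) - y))
        print(f"training MAE: {mae:.3f} wt% (synthetic data, not the paper's result)")

    Once trained, a forward pass through such a network costs only a few matrix multiplications, which is why predictions take milliseconds while a detailed kinetic simulation takes seconds.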

    The Interval Domain: A Matchmaker for aCTL and aPCTL

    Get PDF
    We present aPCTL, a version of PCTL with an action-based semantics which coincides with ordinary PCTL in the case of a sole action type. We point out which aspects of aPCTL may be improved for its application as a probabilistic logic in a tool for modeling large probabilistic systems. We give a non-standard semantics to the action-based temporal logic aCTL, where the propositional clauses are interpreted in a fuzzy way and the modalities in a probabilistic way; the until-construct is evaluated as a least fixed point over these meanings. We view aCTL formulas φ as templates for aPCTL formulas (which still need vectors of thresholds as annotations for all subformulas which are path formulas). Since [φ]s, our non-standard meaning of φ at state s, is an interval [a, b], we may craft aPCTL formulas from φ using the information a and b, respectively. This results in two aPCTL formulas φa and φb. This translation defines a critical region of such thresholds for φ in the following sense: if a > 0 then s satisfies the aPCTL formula φa; dually, if b < 1 then s does not satisfy the formula φb. Thus, any interesting probabilistic dynamics of aPCTL formulas with "pattern" φ has to happen within the n-dimensional interval determined by our non-standard aCTL semantics [φ]. We would like to thank Martín Hötzel Escardó for suggesting that we look at the interval domain at the LICS'97 meeting in Warsaw. He also pointed to work in his PhD thesis about the universality of I. We also acknowledge Marta Kwiatkowska, Christel Baier, Rance Cleaveland, and Scott Smolka for fruitful discussions on this subject matter.
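    To make the interval idea concrete, the toy below gives each proposition an interval-valued (fuzzy) truth value per state, evaluates an until-style operator probabilistically over a small Markov chain, and iterates a least fixed point once with the lower and once with the upper ends, producing an interval [a, b] per state. The chain, the valuations, and the max/min combination rule are invented for illustration and are not the paper's semantics.

        # Toy illustration only: interval bounds for a fuzzy/probabilistic until
        # via least fixed-point iteration. Model and combination rule are invented.
        import numpy as np

        P = np.array([[0.5, 0.5, 0.0],     # transition probabilities
                      [0.2, 0.3, 0.5],
                      [0.0, 0.0, 1.0]])

        p_lo, p_hi = np.array([0.8, 0.6, 0.0]), np.array([1.0, 0.9, 0.0])  # fuzzy "p"
        q_lo, q_hi = np.array([0.0, 0.0, 0.9]), np.array([0.0, 0.1, 1.0])  # fuzzy "q"

        def until_fixpoint(p, q, steps=200):
            """Least fixed point of x = max(q, min(p, P @ x)), iterated from 0."""
            x = np.zeros(len(q))
            for _ in range(steps):
                x = np.maximum(q, np.minimum(p, P @ x))
            return x

        a = until_fixpoint(p_lo, q_lo)   # lower end of the interval per state
        b = until_fixpoint(p_hi, q_hi)   # upper end of the interval per state
        for s, (lo, hi) in enumerate(zip(a, b)):
            print(f"state {s}: [{lo:.3f}, {hi:.3f}]")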

    Approximation in quantale-enriched categories

    Full text link
    Our work is a fundamental study of the notion of approximation in V-categories and in (U,V)-categories, for a quantale V and the ultrafilter monad U. We introduce auxiliary, approximating and Scott-continuous distributors, the way-below distributor, and continuity of V- and (U,V)-categories. We fully characterize continuous V-categories (resp. (U,V)-categories) among all cocomplete V-categories (resp. (U,V)-categories) in the same way as continuous domains are characterized among all dcpos. By varying the choice of the quantale V and the notion of ideals, and by further allowing the ultrafilter monad to act on the quantale, we obtain a flexible theory of continuity that applies to partial orders and to metric and topological spaces. We demonstrate on examples that our theory unifies some major approaches to quantitative domain theory. Comment: 17 pages
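    For orientation, the classical order-theoretic notions that this quantale-enriched setting generalizes are the way-below relation and continuity of a dcpo (standard definitions, not taken from the paper):

        % Way-below relation on a dcpo P:
        x \ll y \;\iff\; \text{for every directed } D \subseteq P \text{ with } y \sqsubseteq \bigsqcup D,\ \exists d \in D:\ x \sqsubseteq d.
        % P is a continuous domain when every element is the directed join of the
        % elements way below it:
        y = \bigsqcup \{\, x \in P \mid x \ll y \,\} \quad \text{for all } y \in P.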