
    Tight WCRT Analysis for Synchronous C Programs

    Accurate estimation of the tick length of a synchronous program is essential for efficient and predictable implementations that are devoid of timing faults. The techniques for determining the tick length statically are classified as worst case reaction time (WCRT) analysis. While a plethora of techniques exists for worst case execution time (WCET) analysis of procedural programs, only a handful of techniques determine the WCRT value of synchronous programs. Most of these techniques produce overestimates and hence are unsuitable for the design of systems that are predictable while also being efficient. In this paper, we present an approach for the accurate estimation of the exact WCRT value of a synchronous program, called its tight WCRT value, using model checking. For our input specifications we have selected a synchronous C-based language called PRET-C that is designed for programming Precision Timed (PRET) architectures. We then present an approach for static WCRT analysis of these programs via an intermediate format called TCCFG. This intermediate representation is then compiled to produce the input for the model checker. Experimental results comparing our approach to existing approaches demonstrate the benefits of the proposed approach. The proposed approach, while presented for PRET-C, is also applicable to WCRT analysis of Esterel with simple adjustments to the generated model. It thus paves the way for a generic approach to determining the tight WCRT value of synchronous programs at compile time.
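    The tight-WCRT idea above can be read as a search for the smallest bound that the model checker can prove safe for every tick. Below is a minimal, hedged Python sketch of that search loop; tick_can_exceed is a hypothetical stand-in for a reachability query against the model generated from the TCCFG and is not part of the paper's toolchain.

        # Minimal sketch, assuming a hypothetical oracle tick_can_exceed(bound) that asks the
        # model checker whether some tick of the generated model can take more than `bound`
        # cycles. The tight WCRT is then the smallest bound that no tick exceeds.
        def tight_wcrt(tick_can_exceed, upper_bound):
            lo, hi = 0, upper_bound          # upper_bound: any safe over-approximation
            while lo < hi:
                mid = (lo + hi) // 2
                if tick_can_exceed(mid):     # a witness tick longer than `mid` exists
                    lo = mid + 1
                else:
                    hi = mid                 # `mid` already bounds every tick
            return lo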

    Bad semidefinite programs: they all look the same

    Conic linear programs, among them semidefinite programs, often behave pathologically: the optimal values of the primal and dual programs may differ, and may not be attained. We present a novel analysis of these pathological behaviors. We call a conic linear system Ax <= b badly behaved if the value of sup { c^T x : Ax <= b } is finite but the dual program has no solution with the same value for some objective c. We describe simple and intuitive geometric characterizations of badly behaved conic linear systems. Our main motivation is the striking similarity of badly behaved semidefinite systems in the literature; we characterize such systems by certain excluded matrices, which are easy to spot in all published examples. We show how to transform semidefinite systems into a canonical form, which allows us to easily verify whether they are badly behaved. We prove several other structural results about badly behaved semidefinite systems; for example, we show that they are in NP ∩ co-NP in the real number model of computing. As a byproduct, we prove that all linear maps that act on symmetric matrices can be brought into a canonical form; this canonical form allows us to easily check whether the image of the semidefinite cone under the given linear map is closed.
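    To make one of the pathologies above concrete, here is a standard two-by-two worked example (not taken from the paper) of a finite optimal value that is not attained, checked numerically in Python.

        import numpy as np

        # Standard non-attainment example (illustration only, not the paper's text):
        #   inf x11  subject to  [[x11, 1], [1, x22]] positive semidefinite.
        # Feasibility forces x11 >= 0, x22 >= 0 and x11 * x22 >= 1, so x11 can be made
        # arbitrarily small but never 0: the optimal value 0 is not attained.
        for eps in (1.0, 0.1, 0.01, 0.001):
            X = np.array([[eps, 1.0], [1.0, 1.0 / eps]])
            assert np.all(np.linalg.eigvalsh(X) >= -1e-9)   # feasible, objective value = eps
        # At x11 = 0 the matrix [[0, 1], [1, x22]] is indefinite for every x22,
        # so no feasible point achieves the value 0.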

    Value-Flow-Based Demand-Driven Pointer Analysis for C and C++

    We present SUPA, a value-flow-based demand-driven flow- and context-sensitive pointer analysis with strong updates for C and C++ programs. SUPA enables computing points-to information via value-flow refinement, in environments with small time and memory budgets. We formulate SUPA by solving a graph-reachability problem on an inter-procedural value-flow graph representing a program's def-use chains, which are pre-computed efficiently but over-approximately. To answer a client query (a request for a variable's points-to set), SUPA reasons about the flow of values along the pre-computed def-use chains sparsely (rather than across all program points), by performing only the work necessary for the query (rather than analyzing the whole program). In particular, strong updates are performed to filter out spurious def-use chains through value-flow refinement as long as the total budget is not exhausted.
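    A hedged sketch of just the demand-driven skeleton described above: starting from the queried variable, walk the pre-computed def-use (value-flow) edges and stop when a budget runs out, falling back to the over-approximate pre-computed answer. It omits strong updates and flow and context sensitivity, and the names below are illustrative rather than SUPA's actual interface.

        from collections import deque

        # Hypothetical interfaces for illustration: defs_of(v) yields the value-flow
        # predecessors of v, is_object(v) is true for address-taken objects, and
        # precomputed(v) returns the over-approximate pre-computed points-to set.
        def points_to(query_var, defs_of, is_object, precomputed, budget=10000):
            result, seen, work = set(), {query_var}, deque([query_var])
            while work:
                if budget == 0:
                    return precomputed(query_var)   # budget exhausted: fall back safely
                budget -= 1
                v = work.popleft()
                for src in defs_of(v):              # traverse sparse def-use chains backwards
                    if is_object(src):
                        result.add(src)             # reached a points-to target
                    elif src not in seen:
                        seen.add(src)
                        work.append(src)
            return result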

    Value Flow Graph Analysis with SATIrE

    Partial redundancy elimination is a common program optimization that attempts to improve execution time by removing superfluous computations from a program. There are two well-known classes of such techniques: syntactic and semantic methods. While semantic optimization is more powerful, traditional algorithms based on SSA form are complicated, heuristic in nature, and unable to perform certain useful optimizations. The value flow graph is a syntactic program representation modeling semantic equivalences; it allows the combination of simple syntactic partial redundancy elimination with a powerful semantic analysis. This yields an optimization that is computationally optimal and simpler than traditional semantic methods. This talk discusses partial redundancy elimination using the value flow graph. A source-to-source optimizer for C++ was implemented using the SATIrE program analysis and transformation system. Two tools integrated in SATIrE were used in the implementation: ROSE, a framework for arbitrary analyses and source-to-source transformations of C++ programs, and PAG, a tool for generating data flow analyzers from functional specifications.
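    For readers unfamiliar with the optimization itself, here is a tiny, hedged before/after illustration of a partial redundancy (plain Python for readability, not SATIrE/ROSE/PAG output): on the path where cond holds, a + b is computed twice; after the transformation every path computes it exactly once.

        def use(value):          # stand-in for any use of the computed value
            print(value)

        def before(a, b, cond):
            if cond:
                use(a + b)       # a + b computed here ...
            return a + b         # ... and again here: partially redundant on the cond path

        def after(a, b, cond):
            t = a + b            # computed once; no path evaluates a + b more than once
            if cond:
                use(t)
            return t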

    Teaching Population Health: Innovations in the integration of the healthcare and public health systems

    Population health is a critical concept in healthcare delivery today. Many healthcare administrators are struggling to adapt their organizations from fee-for-service to value-based delivery. Payers and patients expect healthcare leaders to understand how to deliver care under this new model. Health administration programs play a critical role in training future leaders of healthcare organizations to be adaptable and effective in this dynamic environment. The purpose of this research was to: (a) engage current educators of health administration students in a dialogue about best practices for integrating the healthcare and public health systems; (b) identify the content and pedagogy for population health in the undergraduate and graduate curricula; and (c) discuss exemplar population health curriculum models, available course materials, and curriculum integration options. The authors conducted focus groups of participants attending this educational session at the 2017 annual AUPHA meeting. Qualitative analysis of the focus group discussions was performed, and themes were identified by a consensus process. Study findings provide validated recommendations for population health in the health administration curriculum. The identification of key content areas and pedagogical approaches serves to inform health educators as they prepare future health administrators to practice in this new era of population health.

    Pluggable abstract domains for analyzing embedded software

    Many abstract value domains such as intervals, bitwise, constants, and value-sets have been developed to support dataflow analysis. Different domains offer alternative tradeoffs between analysis speed and precision. Furthermore, some domains are a better match for certain kinds of code than others. This paper presents the design and implementation of cXprop, an analysis and transformation tool for C that implements "conditional X propagation," a generalization of the well-known conditional constant propagation algorithm where X is an abstract value domain supplied by the user. cXprop is interprocedural, context-insensitive, and achieves reasonable precision on pointer-rich codes. We have applied cXprop to sensor network programs running on TinyOS, in order to reduce code size through interprocedural dead code elimination, and to find limited-bitwidth global variables. Our analysis of global variables is supported by a novel concurrency model for interrupt-driven software. cXprop reduces TinyOS application code size by an average of 9.2% and predicts an average data size reduction of 8.2% through RAM compression.
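    The "pluggable X" idea can be illustrated by an abstract domain that exposes only the operations a propagation engine needs. The sketch below is illustrative Python, not cXprop's C interface: an interval domain with a join for control-flow merges, an abstract transfer for addition, and a constant test that lets the engine fold a variable to a literal.

        class Interval:
            # Illustrative interval domain: one candidate "X" for conditional X propagation.
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi

            @staticmethod
            def top():                            # no information about the value
                return Interval(float("-inf"), float("inf"))

            def join(self, other):                # least upper bound at control-flow merges
                return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

            def add(self, other):                 # abstract transfer function for '+'
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def is_constant(self):                # allows folding the variable to a literal
                return self.lo == self.hi

        # Joining [0, 3] and [5, 5] at a merge point yields [0, 5].
        merged = Interval(0, 3).join(Interval(5, 5))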

    Uncertainties of the CJK 5 Flavour LO Parton Distributions in the Real Photon

    Radiatively generated LO quark (u, d, s, c, b) and gluon densities in the real, unpolarized photon, calculated in the CJK model, an improved realization of the CJKL approach, have recently been presented. The results were obtained through a global fit to the experimental F2^gamma data. In this paper we present, for the very first time in the photon case, an estimate of the uncertainties of the CJK parton distributions due to the experimental errors. The analysis is based on the Hessian method, which was recently applied in proton parton structure analyses. Sets of test parametrizations are given for the CJK model. They allow for calculation of its best-fit parton distributions along with F2^gamma, and for computation of the uncertainty of any physical value that depends on the real photon parton densities. We test the applicability of the approach by comparing uncertainties of example cross-sections calculated in the Hessian and Lagrange methods. Moreover, we present a detailed analysis of the chi^2 of the CJK fit and its relation to the data. We show that the large chi^2/DOF of the fit is due to only a few of the experimental measurements; by excluding them, chi^2/DOF ≈ 1 can be obtained. Comment: 28 pages, 8 eps figures, 2 LaTeX figures; FORTRAN programs available at http://www.fuw.edu.pl/~pjank/param.html; table 10, figure 10 and section 6 corrected.
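    For readers unfamiliar with the Hessian method mentioned above, the uncertainty of an observable is commonly obtained from its values on the "+" and "-" eigenvector test sets via the standard symmetric master formula; a minimal Python sketch follows. The function name is illustrative, and the CJK test parametrizations themselves are distributed as FORTRAN code at the URL above.

        import math

        # Symmetric Hessian master formula:
        #   Delta X = 1/2 * sqrt( sum_k [ X(S_k^+) - X(S_k^-) ]^2 ),
        # where obs_plus[k] and obs_minus[k] are the observable X evaluated with the
        # k-th "+" and "-" eigenvector test sets.
        def hessian_uncertainty(obs_plus, obs_minus):
            return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(obs_plus, obs_minus)))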

    Evaluation of mathematical indices as tools for distinguishing β-thalassemia trait from iron deficiency anemia in Portuguese females with microcytic anemia

    Microcytic anemia is a common condition frequently caused by iron deficiency anemia (IDA) or β-thalassemia trait (BTT). Some mathematical indices have been described as fast and inexpensive tools for distinguishing these two conditions. This approach is very useful in mass screening programs, especially in countries with limited resources. This study aimed to evaluate the diagnostic performance of 13 distinct indices: RBC, England&Fraser, Mentzer, Srivastava, Shine&Lal, RDW, Ricerca, Jayabose (RDWI), Green&King (G&K), MDHL, MCHD, Sirdah and Ensani. We investigated 102 adult Portuguese females presenting anemia: BTT carriers (HbA; c.92+6T>C; c.92+110G>A or c.1188C>T) and 51 IDA cases, having assured that no individual had the two conditions simultaneously. To determine the performance of the indices, sensitivity, specificity, the Youden index (YI) and receiver operating characteristic (ROC) curves were calculated. Due to the high values of the AUC (Area Under the Curve) from the ROC analysis, a cutoff of 0.70 for the YI was established in order to determine the best formulas. We found that the 3 best-performing indices for differentiating the 2 groups were RBC (YI=0.71; AUC=0.902), RDWI (YI=0.84; AUC=0.973) and G&K (YI=0.82; AUC=0.972). Our results suggest a similarity with other Mediterranean countries such as Spain and Greece, where G&K and RDWI also performed above our set cutoff; the same is observed in Brazil, probably due to its Portuguese ancestry. We conclude that, when aiming to diagnose the condition underlying a microcytic anemia in a female population, there is value in using this method to recognize the individuals suspected of BTT and forward them for HbA2 measurement or HBB molecular testing. In the future, a robust group of male patients should be added to the analysis in order to extrapolate which of these indices would best apply to the whole adult Portuguese population. This work was partially funded by INSA_2012DGH720 and ISAMB.
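    As a concrete illustration of the kind of quantities being compared (not the study's own code), the Python sketch below computes the well-known Mentzer index, MCV divided by the RBC count, where values below roughly 13 favour BTT and above favour IDA, together with the Youden index J = sensitivity + specificity - 1 that the study thresholds at 0.70. The example patient values are hypothetical.

        def mentzer_index(mcv_fl, rbc_10e6_per_uL):
            # MCV (fL) divided by RBC count (10^6/uL); < ~13 suggests BTT, > ~13 suggests IDA
            return mcv_fl / rbc_10e6_per_uL

        def youden_index(sensitivity, specificity):
            # J = sensitivity + specificity - 1; the study keeps indices with J >= 0.70
            return sensitivity + specificity - 1.0

        # Hypothetical patient: MCV 62 fL, RBC 5.8 x 10^6/uL
        print(mentzer_index(62.0, 5.8))      # ~10.7 -> favours BTT
        print(youden_index(0.92, 0.92))      # 0.84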