
    Value Flow Graph Analysis with SATIrE

    Partial redundancy elimination is a common program optimization that attempts to improve execution time by removing superfluous computations from a program. There are two well-known classes of such techniques: syntactic and semantic methods. While semantic optimization is more powerful, traditional algorithms based on SSA form are complicated, heuristic in nature, and unable to perform certain useful optimizations. The value flow graph is a syntactic program representation that models semantic equivalences; it allows simple syntactic partial redundancy elimination to be combined with a powerful semantic analysis. This yields an optimization that is computationally optimal and simpler than traditional semantic methods. This talk discusses partial redundancy elimination using the value flow graph. A source-to-source optimizer for C++ was implemented using the SATIrE program analysis and transformation system. Two tools integrated in SATIrE were used in the implementation: ROSE, a framework for arbitrary analyses and source-to-source transformations of C++ programs, and PAG, a tool for generating data flow analyzers from functional specifications.
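    As a minimal illustration (assumed for this listing, not taken from the talk) of the kind of redundancy such a pass removes, the following Python-style before/after sketch shows an expression computed on one path and recomputed later; partial redundancy elimination inserts the computation on the other path and reuses a temporary:

```python
# Before PRE: 'a + b' is computed on the 'flag' path and recomputed afterwards,
# so the second computation is partially redundant.
def before(a, b, flag):
    if flag:
        x = a + b          # first computation (only on this path)
    else:
        x = 0
    y = a + b              # recomputed when flag is true
    return x + y

# After PRE: the expression is inserted on the other path and reused via a temporary,
# so no path evaluates 'a + b' more than once.
def after(a, b, flag):
    if flag:
        t = a + b
        x = t
    else:
        t = a + b          # insertion makes the later use fully available
        x = 0
    y = t                  # reuse instead of recomputation
    return x + y
```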

    One-dimensional modelling of mixing, dispersion and segregation of multiphase fluids flowing in pipelines

    The flow of immiscible liquids in pipelines has been studied in this work in order to formulate a one-dimensional model for the computer analysis of two-phase liquid-liquid flow in horizontal pipes. The model reduces the flow patterns commonly encountered in liquid-liquid flow to stratified flow, fully dispersed flow, and partial dispersion with the formation of one or two different emulsions. The model is based on the solution of continuity equations for the dispersed and continuous phases; correlations available in the literature are used for the calculation of the maximum and mean dispersed-phase drop diameter, the emulsion viscosity, the phase inversion point, the liquid-wall friction factors, the liquid-liquid friction factors at the interface, and the slip velocity between the phases. In the absence of validated models for entrainment and deposition in liquid-liquid flow, two entrainment rate correlations and two deposition models originally developed for gas-liquid flow have been adapted to liquid-liquid flow. The model was applied to the flow of oil and water; the predicted flow regimes have been presented as a function of the input water fraction and mixture velocity and compared with experimental results, showing overall good agreement between calculations and experiments. Calculated values of oil-in-water and water-in-oil dispersed fractions were compared against experimental data for different oil and water superficial velocities, input water fractions and mixture velocities. Pressure losses calculated in the fully developed flow region of the pipe, a crucial quantity in industrial applications, are reasonably close to the measured values. Discrepancies and possible improvements of the model are also discussed. The model for two-phase flow was extended to three-phase liquid-liquid-gas flow within the framework of the two-fluid model. The two liquid phases were treated as a single liquid phase with suitably averaged properties. The model for three-phase flow thus developed was implemented in an existing research code for the simulation of three-phase slug flow with the formation of emulsions in the liquid phase and phase inversion phenomena. Comparisons with experimental data are presented.
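    The abstract does not reproduce the governing equations; a generic one-dimensional form of the phase continuity equations it refers to (symbols assumed here, not quoted from the paper) is:

```latex
\frac{\partial}{\partial t}\!\left(\alpha_k \rho_k A\right)
  + \frac{\partial}{\partial x}\!\left(\alpha_k \rho_k u_k A\right)
  = \Gamma_k ,
\qquad k \in \{\text{dispersed},\ \text{continuous}\},
\qquad \alpha_d + \alpha_c = 1,
```

    where α_k, ρ_k and u_k are the fraction, density and velocity of phase k, A is the pipe cross-section, and Γ_k is the net interphase mass transfer (e.g. entrainment minus deposition) for phase k.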

    A Formal Framework for Precise Parametric WCET Formulas

    Parametric worst-case execution time (WCET) formulas are a valuable tool to estimate the impact of input data properties on the WCET at design time, or to guide scheduling decisions at runtime. Previous approaches to parametric WCET analysis either provide only informal ad-hoc solutions or tend to be rather pessimistic, as they do not take flow constraints other than simple loop bounds into account. We develop a formal framework around path and frequency expressions, which allow us to reason about execution frequencies of program parts. Starting from a reducible control flow graph and a set of (parametric) constraints, we show how to obtain frequency expressions and refine them by means of sound approximations, which account for more sophisticated flow constraints. Finally, we obtain closed-form parametric WCET formulas by means of partial evaluation. We developed a prototype implementing our solution to parametric WCET analysis and compared existing approaches within our setting. As our framework supports fine-grained transformations to improve the precision of parametric formulas, it makes it possible to focus on important flow relations in order to avoid intractably large formulas.
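    As a purely hypothetical example of the shape of formula such a framework produces (the program, constants and parameters below are assumptions, not taken from the paper), a loop with parametric bound n whose expensive branch is constrained to execute at most m times in total would yield:

```latex
\mathrm{WCET}(n, m) \;=\; c_{\text{entry}} \;+\; n \cdot c_{\text{body}} \;+\; \min(n, m) \cdot c_{\text{branch}},
```

    where the c terms are per-block execution-time bounds and the min(n, m) term is exactly the kind of flow constraint beyond a simple loop bound that the frequency-expression refinement captures.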

    IST Austria Thesis

    This dissertation focuses on algorithmic aspects of program verification, and presents modeling and complexity advances on several problems related to the static analysis of programs, the stateless model checking of concurrent programs, and the competitive analysis of real-time scheduling algorithms. Our contributions can be broadly grouped into five categories. Our first contribution is a set of new algorithms and data structures for the quantitative and data-flow analysis of programs, based on the graph-theoretic notion of treewidth. It has been observed that the control-flow graphs of typical programs have special structure, and are characterized as graphs of small treewidth. We utilize this structural property to provide faster algorithms for the quantitative and data-flow analysis of recursive and concurrent programs. In most cases we make an algebraic treatment of the considered problem, where several interesting analyses, such as the reachability, shortest path, and certain kinds of data-flow analysis problems, follow as special cases. We exploit the constant-treewidth property to obtain algorithmic improvements for on-demand versions of the problems, and provide data structures with various tradeoffs between the resources spent in the preprocessing and querying phases. We also improve on the algorithmic complexity of quantitative problems outside the algebraic path framework, namely of the minimum mean-payoff, minimum ratio, and minimum initial credit for energy problems. Our second contribution is a set of algorithms for Dyck reachability with applications to data-dependence analysis and alias analysis. In particular, we develop an optimal algorithm for Dyck reachability on bidirected graphs, which are ubiquitous in context-insensitive, field-sensitive points-to analysis. Additionally, we develop an efficient algorithm for context-sensitive data-dependence analysis via Dyck reachability, where the task is to obtain analysis summaries of library code in the presence of callbacks. Our algorithm preprocesses libraries in almost linear time, after which the contribution of the library to the complexity of the client analysis is (i) linear in the number of call sites and (ii) only logarithmic in the size of the whole library, as opposed to linear in the size of the whole library. Finally, we prove that Dyck reachability is Boolean Matrix Multiplication-hard in general, and the hardness also holds for graphs of constant treewidth. This hardness result strongly indicates that there exist no combinatorial algorithms for Dyck reachability with truly subcubic complexity. Our third contribution is the formalization and algorithmic treatment of the Quantitative Interprocedural Analysis framework. In this framework, the transitions of a recursive program are annotated as good, bad or neutral, and receive a weight which measures the magnitude of their respective effect. The Quantitative Interprocedural Analysis problem asks to determine whether there exists an infinite run of the program where the long-run ratio of the bad weights over the good weights is above a given threshold. We illustrate how several quantitative problems related to the static analysis of recursive programs can be instantiated in this framework, and present some case studies in this direction. Our fourth contribution is a new dynamic partial-order reduction for the stateless model checking of concurrent programs. Traditional approaches rely on the standard Mazurkiewicz equivalence between traces, by means of partitioning the trace space into equivalence classes and attempting to explore a few representatives from each class. We present a new dynamic partial-order reduction method called Data-centric Partial Order Reduction (DC-DPOR). Our algorithm is based on a new equivalence between traces, called observation equivalence. DC-DPOR explores a coarser partitioning of the trace space than any exploration method based on the standard Mazurkiewicz equivalence. Depending on the program, the new partitioning can even be exponentially coarser. Additionally, DC-DPOR spends only polynomial time in each explored class. Our fifth contribution is the use of automata and game-theoretic verification techniques in the competitive analysis and synthesis of real-time scheduling algorithms for firm-deadline tasks. On the analysis side, we leverage automata on infinite words to compute the competitive ratio of real-time schedulers subject to various environmental constraints. On the synthesis side, we introduce a new instance of two-player mean-payoff partial-information games, and show how the synthesis of an optimal real-time scheduler can be reduced to computing winning strategies in this new type of game.
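    As a toy illustration of the standard Mazurkiewicz equivalence that DC-DPOR coarsens (a sketch under an assumed event encoding and dependency relation, not the thesis's algorithm), the following Python snippet enumerates the traces reachable from a given trace by swapping adjacent independent events:

```python
# Events are hypothetical (thread, op, var) triples; two events are dependent when
# they belong to the same thread or access the same variable with at least one write.
from collections import deque

def dependent(e1, e2):
    (t1, op1, x1), (t2, op2, x2) = e1, e2
    return t1 == t2 or (x1 == x2 and "w" in (op1, op2))

def mazurkiewicz_class(trace):
    """Enumerate all traces equivalent to `trace` under adjacent independent swaps."""
    seen = {tuple(trace)}
    queue = deque(seen)
    while queue:
        t = queue.popleft()
        for i in range(len(t) - 1):
            if not dependent(t[i], t[i + 1]):
                s = t[:i] + (t[i + 1], t[i]) + t[i + 2:]
                if s not in seen:
                    seen.add(s)
                    queue.append(s)
    return seen

# Example: thread 1 writes x, thread 2 writes y, thread 1 reads y.
trace = [(1, "w", "x"), (2, "w", "y"), (1, "r", "y")]
for t in sorted(mazurkiewicz_class(trace)):
    print(t)
```

    Both traces in this tiny class lead the read of y to observe the same write, which is the intuition behind the coarser observation equivalence mentioned above.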

    A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data

    Effective connectivity analysis, in which the flow of information between even remote brain regions is inferred from the parameters of a predictive dynamical model, can greatly improve the insight into brain function that we obtain from fMRI data. As opposed to biologically inspired models, some techniques, such as Granger causality (GC), are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach can suffer from two main limitations when applied to BOLD fMRI data: the confounding effect of the hemodynamic response function (HRF) and conditioning on a large number of variables in the presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI, on the other hand, the absence of explicit inputs makes this task more difficult, unless one relies on some specific prior physiological hypothesis. In order to overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for the BOLD-fMRI signal. Regarding the second limitation, fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting. Furthermore, conceptual issues arise in the presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Combining these two improvements, we compare the differences between BOLD-level and deconvolved-BOLD-level effective networks and draw some conclusions.
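    The following is a minimal numerical sketch of the pairwise Granger-causality idea referred to above (order-p autoregressive models fitted by ordinary least squares; an assumed textbook formulation, not the paper's deconvolution or partial-conditioning method):

```python
# x "Granger-causes" y if adding x's past to an autoregressive model of y
# reduces the prediction error of y.
import numpy as np

def granger_x_to_y(x, y, p=1):
    """Log ratio of residual variances (restricted vs. full model); > 0 suggests x -> y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    target = y[p:]
    past_y = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    past_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    restricted = np.column_stack([np.ones(n - p), past_y])   # y's own past only
    full = np.column_stack([restricted, past_x])              # plus x's past
    res_r = target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(np.var(res_r) / np.var(res_f)))

# Toy usage: y lags x by one sample, so the measure comes out clearly positive.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(500)
print(granger_x_to_y(x, y, p=1))
```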

    A Light-Weight Approach for Verifying Multi-Threaded Programs with CPAchecker

    Verifying multi-threaded programs is becoming more and more important because of the strong trend to increase the number of processing units per CPU socket. We introduce a new configurable program analysis for verifying multi-threaded programs with a bounded number of threads. We present a simple yet efficient implementation as a component of the existing program-verification framework CPACHECKER. While CPACHECKER is already competitive on a large benchmark set of sequential verification tasks, our extension enhances the overall applicability of the framework. Our implementation of handling multiple threads is orthogonal to the abstract domain of the data-flow analysis and can thus be combined with several existing analyses in CPACHECKER, such as value analysis, interval analysis, and BDD analysis. The new analysis is modular and can be used, for example, to verify reachability properties as well as to detect deadlocks in the program. This paper includes an evaluation of the benefit of some optimization steps (e.g., changing the iteration order of the reachability algorithm or applying partial-order reduction) as well as a comparison with other state-of-the-art tools for verifying multi-threaded programs.
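    As a toy illustration of the kind of defect such a deadlock analysis targets (written in Python for brevity; CPACHECKER itself analyzes C verification tasks, and the example below is an assumption of this listing, not a benchmark from the paper), two threads acquiring two locks in opposite order can end up in a cyclic wait:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:
        with lock_b:      # holds lock_a, waits for lock_b
            pass

def worker_2():
    with lock_b:
        with lock_a:      # holds lock_b, waits for lock_a
            pass

# Running worker_1 and worker_2 concurrently (e.g. via threading.Thread) may deadlock:
# thread 1 holds lock_a and waits for lock_b while thread 2 holds lock_b and waits for
# lock_a. This cyclic-wait situation is exactly what a deadlock analysis reports.
```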

    Additional insights to EC7 from the application of reliability-based design methods: the case of debris flow protection structures

    Debris flows are dangerous natural processes that cause extensive damage to infrastructure and urbanized areas and can lead to loss of human lives. Their unpredictability, their extremely rapid motion and their magnitude are the main causes of these harms. Mitigation measures are fundamental for reducing the associated risk and protecting infrastructure in mountainous areas. Their design is still an open issue: there are many formulations for evaluating impact pressure. Moreover, the uncertainties in the determination of the flow characteristics (velocity and thickness) are significantly high and difficult to quantify. In the European Union, the design of any type of structure in the rock mechanics field must comply with EN-1997 Geotechnical Design (CEN 2004) (EC7). For debris flow countermeasures, the EC7 requirements are very difficult to apply in practice, since partial safety factors are not provided for these phenomena. However, the basic philosophy of reliability-based design (RBD), as defined in EN1990 (CEN 2002), may be a suitable and complementary approach to provide geotechnical structures with a uniform probability of failure. Reliability-based design can provide additional insights to EC7 design and can be applied when partial factors have yet to be proposed (by EC7) to cover the uncertainties of less common parameters, as in the case of debris flow countermeasures. This paper presents an analysis of the advantages and limitations of applying the RBD approach to debris flow countermeasures, using the first-order reliability method (FORM). In particular, data availability, the possibilities for analysing data in a statistical framework and the choice of the performance function are the main limitations of the method, and they force assumptions to be made regarding the statistical distributions of the considered parameters. A sensitivity analysis, comparing different equations commonly used for debris flow impact pressure estimation, was performed to quantify the effect of the selected performance function on the RBD results.
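    For reference, the quantity at the core of FORM (stated here in its standard textbook form with assumed notation, not quoted from the paper) is the reliability index β, the distance from the origin to the limit-state surface in standard normal space:

```latex
\beta \;=\; \min_{\mathbf{u}} \,\lVert \mathbf{u} \rVert
\quad \text{subject to} \quad g(\mathbf{u}) = 0,
\qquad P_f \,\approx\, \Phi(-\beta),
```

    where g is the performance (limit-state) function expressed in standard normal variables u via the distributions assumed for the input parameters, and Φ is the standard normal cumulative distribution function.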

    An Abstract Interpretation-Based Framework for Control Flow Reconstruction from Binaries

    Due to indirect branch instructions, analyses on executables commonly suffer from the problem that a complete control flow graph of the program is not available. Data flow analysis has been proposed before to statically determine branch targets in many cases, yet a generic strategy without assumptions on compiler idioms or debug information is lacking. We have devised an abstract interpretation-based framework for generic low-level programs with indirect jumps, which safely combines a pluggable abstract domain with the notion of partial control flow graphs. Using our framework, we are able to show that the control flow reconstruction algorithm of our disassembly tool Jakstab produces the most precise overapproximation of the control flow graph with respect to the used abstract domain.
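    A drastically simplified sketch of the underlying idea (the toy instruction format and the sets-of-constants domain are assumptions for illustration; this is not Jakstab's algorithm or domain): propagate abstract register values over the partial control flow graph built so far, and add edges whenever the value flowing into an indirect jump resolves to a finite set of targets.

```python
# Toy sketch: resolve indirect jumps by propagating finite sets of constants over a
# control flow graph that is extended as new jump targets become known.
TOP = object()   # "unknown" abstract value

def join(a, b):
    return TOP if a is TOP or b is TOP else a | b

def reconstruct_cfg(program, entry=0):
    env_at = {entry: {}}          # abstract register environment per location
    edges = set()
    worklist = [entry]
    while worklist:
        pc = worklist.pop()
        env, instr = env_at[pc], program[pc]
        if instr[0] == "mov":                    # ("mov", reg, const)
            _, reg, const = instr
            env = dict(env, **{reg: {const}})
            succs = [pc + 1]
        elif instr[0] == "jmp":                  # ("jmp", reg) -- indirect jump
            targets = env.get(instr[1], TOP)
            succs = [] if targets is TOP else sorted(targets)
        else:                                    # ("halt",)
            succs = []
        for s in succs:
            edges.add((pc, s))
            old = env_at.get(s)
            new = env if old is None else {
                r: join(old.get(r, set()), env.get(r, set()))
                for r in set(old) | set(env)
            }
            if new != old:                       # environment changed: revisit successor
                env_at[s] = new
                worklist.append(s)
    return edges

# Hypothetical three-instruction program: load 4 into r1, then jump through r1.
program = {0: ("mov", "r1", 4), 1: ("jmp", "r1"), 4: ("halt",)}
print(sorted(reconstruct_cfg(program)))          # [(0, 1), (1, 4)]
```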

    A Framework to improve Turbulence Models using Full-field Inversion and Machine Learning

    Accurate prediction of turbulent flows remains a barrier to the widespread use of computational fluid dynamics in analysis and design. Since practical wall-bounded turbulent flows involve a very wide range of length and time scales, it is intractable to resolve all relevant scales, due to limitations in computational power. The usual tools for prediction, in order of their accuracy, include direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes (RANS) based models. DNS and LES will continue to be prohibitively expensive for the analysis of high-Reynolds-number wall-bounded flows for at least two more decades, and for much longer for design applications. At the same time, the high-quality data generated by such simulations provide detailed information about turbulence physics in affordable problems. Experimental measurements have the potential to offer limited data in more practical regimes. However, data from simulations and experiments are mostly used for validation, not directly for model improvement. This thesis presents a generalized framework of data-augmented modeling, referred to as field inversion and machine learning (FIML). FIML is utilized to develop augmentations to RANS-based models using data from DNS, LES or experiments. The framework involves the solution of multiple inverse problems to infer spatial discrepancies in a baseline turbulence model by minimizing the misfit between data and predictions. Solving the inverse problem to infer the spatial discrepancy field allows the use of data of a wide variety and fidelity. Inferring the field discrepancy in this way connects the data and the turbulence model in a manner consistent with the underlying assumptions of the baseline model. Several such discrepancy fields are used as inputs to a machine learning procedure, which in turn reconstructs corrective functional forms in terms of local flow quantities. The machine-learned discrepancy is then embedded within existing turbulence closures, resulting in a partial differential equation/machine learning hybrid, and utilized for prediction. The FIML framework is applied to augment the Spalart-Allmaras (SA) model and Wilcox's k-ω (KOM) model for flows involving curvature, adverse pressure gradients, and separation. The value of the framework is demonstrated by augmenting the SA model for massively separated flows over airfoils using lift data for just one airfoil. The augmented SA model is able to accurately predict the surface pressure, the point of separation and the maximum lift -- even for Reynolds numbers and airfoil shapes not used for training the model. The portability of the augmented model is demonstrated by utilizing an in-house finite-volume flow solver with FIML to develop augmentations and embedding them in a commercial finite-element solver. The implication is that the ML-augmented model can be used in a fashion similar to present-day turbulence models. While the results presented in this thesis are limited to turbulence modeling, the FIML framework represents a general physics-constrained data-driven paradigm that can be applied to augment models governed by partial differential equations.
    PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144034/1/anandps_1.pd
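    One common way to write the field-inversion step described above (notation assumed here, not quoted from the thesis) is as a regularized misfit minimization over a spatial corrective field β(x) that multiplies a term of the baseline closure:

```latex
\beta^{*} \;=\; \arg\min_{\beta}\; \lVert d - F(\beta) \rVert^{2} \;+\; \lambda\, \lVert \beta - 1 \rVert^{2},
```

    where d is the available data (e.g. lift or surface pressure), F(β) is the corresponding prediction of the RANS model augmented with β, and λ is a regularization weight; the machine-learning step then fits β* as a function of local flow features so that the correction can be applied predictively.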