
    Bounding Worst-Case Data Cache Behavior by Analytically Deriving Cache Reference Patterns

    While caches have become invaluable for higher-end architectures due to their ability to hide, in part, the gap between processor speed and memory access times, caches (and particularly data caches) limit the timing predictability for data accesses that may reside in memory or in cache. This is a significant problem for real-time systems. The objective of our work is to provide accurate predictions of the data cache behavior of scalar and non-scalar references whose reference patterns are known at compile time. Such knowledge about cache behavior provides the basis for significant improvements in bounding the worst-case execution time (WCET) of real-time programs, particularly for hard-to-analyze data caches. We exploit the power of the Cache Miss Equations (CME) framework but lift a number of limitations of traditional CME to generalize the analysis to more arbitrary programs. We further devised a transformation, coined “forced” loop fusion, which facilitates the analysis across sequential loops. Our contributions result in exact data cache reference patterns, in contrast to the approximate cache miss behavior of prior work. Experimental results indicate improvements in the accuracy of worst-case data cache behavior of up to two orders of magnitude over the original approach. In fact, our results closely bound and sometimes even exactly match those obtained by trace-driven simulation for worst-case inputs. The resulting WCET bounds of timing analysis confirm these findings in terms of providing tight bounds. Overall, our contributions lift analytical approaches to predicting data cache behavior to a level suitable for efficient static timing analysis and, subsequently, real-time schedulability of tasks with predictable WCET.
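
    The abstract does not reproduce the Cache Miss Equations themselves, but the kind of reference-pattern information such an analysis starts from can be illustrated with a minimal sketch: mapping the accesses of an affine array reference in a loop nest to cache sets. This is not the CME framework; the array layout, base address, and cache geometry below are illustrative assumptions.

```python
# Minimal sketch (not the CME framework): enumerate the cache sets touched by
# an affine array reference a[i][j] inside a rectangular loop nest.
# All parameters below are illustrative assumptions.
LINE_SIZE = 32        # bytes per cache line
NUM_SETS = 128        # direct-mapped cache with 128 sets
ELEM_SIZE = 8         # sizeof(double)
BASE_ADDR = 0x1000    # assumed base address of the array
N, M = 64, 64         # loop bounds / array dimensions

def cache_set(addr: int) -> int:
    """Map a byte address to its cache set (direct-mapped cache)."""
    return (addr // LINE_SIZE) % NUM_SETS

def reference_pattern():
    """Yield ((i, j), cache_set) pairs for the reference a[i][j]."""
    for i in range(N):
        for j in range(M):
            addr = BASE_ADDR + (i * M + j) * ELEM_SIZE
            yield (i, j), cache_set(addr)

# A worst-case analysis reasons about which iterations map to the same set
# (potential conflicts) and which fall on the same line (guaranteed spatial reuse).
if __name__ == "__main__":
    sets = [s for _, s in reference_pattern()]
    print("distinct sets touched:", len(set(sets)))
```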

    Lectures on the functional renormalization group method

    These introductory notes are about functional renormalization group equations and some of their applications. It is emphasised that the applicability of this method extends well beyond critical systems; it actually provides a general-purpose algorithm to solve strongly coupled quantum field theories. The renormalization group equation of F. Wegner and A. Houghton is shown to resum the loop expansion. Another version, due to J. Polchinski, is obtained by the method of collective coordinates and can be used for the resummation of the perturbation series. The genuinely non-perturbative evolution equation is obtained in a manner reminiscent of the Schwinger-Dyson equations. Two variants of this scheme are presented, where the scale which determines the order of the successive elimination of the modes is extracted from external and internal spaces. The renormalization of composite operators is discussed briefly as an alternative way to arrive at the renormalization group equation. The scaling laws and fixed points are considered from local and global points of view. Instability-induced renormalization and new scaling laws are shown to occur in the symmetry-broken phase of the scalar theory. The flattening of the effective potential of a compact variable is demonstrated in the case of the sine-Gordon model. Finally, a manifestly gauge invariant evolution equation is given for QED. Comment: 47 pages, 11 figures, final version
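
    For orientation, the genuinely non-perturbative evolution equation referred to above is most often quoted in the Wetterich form for the effective average action; the notes may use a different regulator, cutoff scheme, or notation.

```latex
% Exact flow equation for the effective average action \Gamma_k
% (Wetterich form; regulator R_k and conventions vary between presentations).
\partial_k \Gamma_k[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1}\partial_k R_k\right]
```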

    Automatic Safe Data Reuse Detection for the WCET Analysis of Systems With Data Caches

    Worst-case execution time (WCET) analysis of systems with data caches is one of the key challenges in real-time systems. Caches exploit the inherent reuse properties of programs, temporarily storing certain memory contents near the processor so that further accesses to such contents do not require costly memory transfers. Current worst-case data cache analysis methods focus on specific cache organizations (LRU, locked, ACDC, etc.). In this article, we analyze data reuse (in the worst case) as a property of the program, and thus independent of the data cache. Our analysis method uses Abstract Interpretation on the compiled program to extract, for each static load/store instruction, a linear expression for the address pattern of its data accesses, according to the Loop Nest Data Reuse Theory. Each data access expression is compared to that of prior (dominant) memory instructions to verify whether it presents a guaranteed reuse. Our proposal manages references to scalars, arrays, and non-linear accesses, provides both temporal and spatial reuse information, and does not require the exploration of explicit data access sequences. As a proof of concept, we analyze the TACLeBench benchmark suite, showing that most loads/stores present data reuse, and how compiler optimizations affect it. Using a simple hit/miss estimation on our reuse results, the time devoted to data accesses in the worst case is reduced to 27% compared to an always-miss system, equivalent to a data hit ratio of 81%. With compiler optimization, such time is reduced to 6.5%.
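
    As a rough illustration of the reuse test described above (not the paper's exact algorithm), the sketch below compares the affine address expressions of two static memory instructions and reports guaranteed temporal or spatial reuse; the line size and example addresses are assumptions.

```python
# Illustrative sketch (not the paper's exact algorithm): detect guaranteed
# reuse between two static memory instructions whose addresses are affine
# expressions of the enclosing loop induction variables.
from dataclasses import dataclass

LINE_SIZE = 64  # assumed cache line size in bytes

@dataclass(frozen=True)
class AffineAddr:
    """addr = base + sum(coeff * iv) over the loop induction variables."""
    base: int
    coeffs: tuple  # ((iv_name, coefficient), ...), sorted by iv_name

def classify_reuse(later: AffineAddr, earlier: AffineAddr) -> str:
    """Classify reuse of `later` with respect to a dominating `earlier` access."""
    if later.coeffs != earlier.coeffs:
        return "no guaranteed reuse"   # different strides: nothing is guaranteed
    delta = later.base - earlier.base
    if delta == 0:
        return "temporal reuse"        # same address on every iteration
    if 0 <= delta < LINE_SIZE:
        return "spatial reuse"         # same cache line (alignment permitting)
    return "no guaranteed reuse"

# Example: a[i] reloaded (temporal), and a[i+1] after a[i] with 8-byte elements (spatial).
a_i  = AffineAddr(base=0x2000, coeffs=(("i", 8),))
a_i1 = AffineAddr(base=0x2008, coeffs=(("i", 8),))
print(classify_reuse(a_i, a_i))    # temporal reuse
print(classify_reuse(a_i1, a_i))   # spatial reuse
```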

    Next-to-eikonal corrections to soft gluon radiation: a diagrammatic approach

    We consider the problem of soft gluon resummation for gauge theory amplitudes and cross sections, at next-to-eikonal order, using a Feynman diagram approach. At the amplitude level, we prove exponentiation for the set of factorizable contributions, and construct effective Feynman rules which can be used to compute next-to-eikonal emissions directly in the logarithm of the amplitude, finding agreement with earlier results obtained using path-integral methods. For cross sections, we also consider sub-eikonal corrections to the phase space for multiple soft-gluon emissions, which contribute to next-to-eikonal logarithms. To clarify the discussion, we examine a class of log(1 - x) terms in the Drell-Yan cross section up to two loops. Our results are the first steps towards a systematic generalization of threshold resummations to next-to-leading power in the threshold expansion. Comment: 66 pages, 19 figures
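
    For orientation, the eikonal approximation that the paper extends amounts to emitting soft gluons through the leading-power current below; next-to-eikonal contributions are suppressed by one further power of the soft momentum k. Signs and i-epsilon prescriptions depend on conventions and are not fixed here.

```latex
% Leading (eikonal) soft-gluon emission current for a gluon of momentum k
% radiated from hard external lines with momenta p_i and colour generators T_i^a.
J^{\mu\,a}_{\text{eik}}(k) \;=\; g_s \sum_i T_i^a \, \frac{p_i^\mu}{p_i \cdot k}
```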

    Two Notions of Naturalness

    My aim in this paper is twofold: (i) to distinguish two notions of naturalness employed in BSM physics and (ii) to argue that recognizing this distinction has methodological consequences. One notion of naturalness is an "autonomy of scales" requirement: it prohibits sensitive dependence of an effective field theory's low-energy observables on the precise specification of the theory's description of cutoff-scale physics. I will argue that considerations from the general structure of effective field theory provide justification for the role this notion of naturalness has played in BSM model construction. A second, distinct notion construes naturalness as a statistical principle requiring that the values of the parameters in an effective field theory be "likely" given some appropriately chosen measure on some appropriately circumscribed space of models. I argue that these two notions are historically and conceptually related but are motivated by distinct theoretical considerations and admit of distinct kinds of solution. Comment: 34 pages, 1 figure
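
    Although the paper is conceptual, the first notion is often made quantitative in the BSM literature via a sensitivity measure of the Barbieri-Giudice type, sketched below; the choice of observable (here m_Z) and of parameters p_i is an assumption for illustration.

```latex
% A common quantitative proxy for the "autonomy of scales" notion:
% sensitivity of a low-energy observable (here m_Z) to the underlying
% parameters p_i of the high-scale theory.
\Delta_{\mathrm{BG}} \;=\; \max_i \left|\frac{\partial \ln m_Z^2}{\partial \ln p_i}\right|
```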

    An integrated approach to high integrity software verification.

    Computer software is developed through software engineering. At its most precise, software engineering involves mathematical rigour in the form of formal methods. High integrity software is associated with safety-critical and security-critical applications, where failure would bring significant costs. The development of high integrity software is subject to stringent standards, prescribing best practices to increase quality. Typically, these standards will strongly encourage or enforce the application of formal methods. The application of formal methods can entail a significant amount of mathematical reasoning. Thus, the development of automated techniques is an active area of research. The trend is to deliver increased automation through two complementary approaches. Firstly, lightweight formal methods are adopted, sacrificing expressive power, breadth of coverage, or both in favour of tractability. Secondly, integrated solutions are sought, exploiting the strengths of different technologies to increase automation. The objective of this thesis is to support the production of high integrity software by automating an aspect of formal methods. To develop tractable techniques we focus on the niche activity of verifying exception freedom. To increase effectiveness, we integrate the complementary technologies of proof planning and program analysis. Our approach is investigated by enhancing the SPARK Approach, as developed by Altran Praxis Limited. Our approach is implemented and evaluated as the SPADEase system. The key contributions of the thesis are summarised below:
    • Configurable and Sound - Present a configurable and justifiably sound approach to software verification.
    • Cooperative Integration - Demonstrate that more targeted and effective automation can be achieved through the cooperative integration of distinct technologies.
    • Proof Discovery - Present proof plans that support the verification of exception freedom.
    • Invariant Discovery - Present invariant discovery heuristics that support the verification of exception freedom.
    • Implementation as SPADEase - Implement our approach as SPADEase.
    • Industrial Evaluation - Evaluate SPADEase against both textbook and industrial subprograms.

    Statistical mechanics of permanent random atomic and molecular networks: Structure and heterogeneity of the amorphous solid state

    Under sufficient permanent random covalent bonding, a fluid of atoms or small molecules is transformed into an amorphous solid network. Being amorphous, local structural properties in such networks vary across the sample. A natural order parameter, resulting from a statistical-mechanical approach, captures information concerning this heterogeneity via a certain joint probability distribution. This joint probability distribution describes the variations in the positional and orientational localization of the particles, reflecting the random environments experienced by them, as well as further information characterizing the thermal motion of particles. A complete solution, valid in the vicinity of the amorphous solidification transition, is constructed essentially analytically for the amorphous solid order parameter, in the context of the random network model and approach introduced by Goldbart and Zippelius [Europhys. Lett. 27, 599 (1994)]. Knowledge of this order parameter allows us to draw certain conclusions about the structure and heterogeneity of randomly covalently bonded atomic or molecular network solids in the vicinity of the amorphous solidification transition. Inter alia, the positional aspects of particle localization are established to have precisely the structure obtained previously in the context of vulcanized media, and results are found for the analogue of the spin glass order parameter describing the orientational freezing of the bonds between particles. Comment: 31 pages, 5 figures
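
    The abstract does not quote the order parameter itself; as a hedged sketch, the positional part recovered from the vulcanization context is usually written as a delocalized fraction plus a localized fraction characterized by a distribution of inverse square localization lengths. The notation below is illustrative and omits the orientational degrees of freedom treated in the paper.

```latex
% Schematic positional part of the amorphous-solid order parameter:
% localized (gel) fraction Q, distribution p(\tau) of inverse square localization lengths.
\Omega(\mathbf{k}) \;=\; (1-Q)\,\delta_{\mathbf{k},\mathbf{0}}
  \;+\; Q \int_0^\infty \! d\tau \, p(\tau)\, e^{-\mathbf{k}^2/2\tau}
```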

    Matrix Graph Grammars

    The objective of this book is to develop an algebraization of graph grammars. Equivalently, we study graph dynamics. From the point of view of a computer scientist, graph grammars are a natural generalization of Chomsky grammars, for which a purely algebraic approach has not existed up to now. A Chomsky (or string) grammar is, roughly speaking, a precise description of a formal language (which in essence is a set of strings). In a more discrete-mathematical style, it can be said that graph grammars -- Matrix Graph Grammars in particular -- study the dynamics of graphs. Ideally, this algebraization would reinforce our understanding of grammars in general, providing new analysis techniques and generalizations of concepts, problems and results known so far. Comment: 321 pages, 75 figures. This book is published by VDM Verlag, ISBN 978-363921255
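
    A minimal sketch of the boolean-matrix view of a graph rewrite that the book algebraizes is given below; it only shows edge deletion and addition on an adjacency matrix, and omits the matching, node handling, and compatibility conditions of the full formalism. All matrices in the example are assumptions for illustration.

```python
# Minimal sketch of a graph rewrite on a boolean adjacency matrix:
# delete the edges marked in `erase`, then add the edges marked in `add`.
import numpy as np

def apply_rule(adj: np.ndarray, erase: np.ndarray, add: np.ndarray) -> np.ndarray:
    """Return the rewritten adjacency matrix: add OR (adj AND NOT erase)."""
    return add | (adj & ~erase)

# Example on a 3-node graph: remove edge 0 -> 1, add edge 1 -> 2.
adj   = np.array([[0, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]], dtype=bool)
erase = np.zeros((3, 3), dtype=bool); erase[0, 1] = True
add   = np.zeros((3, 3), dtype=bool); add[1, 2] = True
print(apply_rule(adj, erase, add).astype(int))
```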