
    Improving WCET Analysis Precision through Automata Product

    Real-time scheduling of applications requires a sound estimation of the Worst-Case Execution Time (WCET) of each task. Part of the over-approximation introduced by the WCET analysis of a task comes from not taking into account that the (implicit) worst-case execution path may be infeasible. This article does not address the question of finding infeasible paths but provides a new automata-based formalism to describe sets of infeasible paths. This formalism combines state-based path acceptance (as in regular automata), constraints on counters (in the fashion of the Implicit Path Enumeration Technique), and contexts of validity (as in Statecharts). We show the applicability of our proposal by performing infeasible-path-aware WCET analyses within the OTAWA framework. We provide algorithms that transform the control flow graph and/or the constraint system supporting the WCET analysis in order to exclude the specified paths.
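
    To make the idea concrete, here is a minimal sketch (in Python, not the article's OTAWA implementation) of how excluding an infeasible path tightens a WCET estimate on a toy control flow graph; the graph, the block costs, and the mutual-exclusion constraint between blocks b2 and b4 are invented for illustration.

        # Toy WCET estimation on a small acyclic CFG, with and without one
        # infeasible-path constraint. All numbers are made up; a real IPET
        # analysis would solve an integer linear program over edge counts.
        from itertools import product

        COST = {"entry": 2, "b1": 10, "b2": 40, "mid": 3, "b3": 8, "b4": 50, "exit": 1}

        def paths():
            # entry -> (b1 | b2) -> mid -> (b3 | b4) -> exit
            for first, second in product(("b1", "b2"), ("b3", "b4")):
                yield ("entry", first, "mid", second, "exit")

        def wcet(feasible=lambda p: True):
            return max(sum(COST[b] for b in p) for p in paths() if feasible(p))

        print(wcet())                                      # 96: path through b2 and b4
        print(wcet(lambda p: not {"b2", "b4"} <= set(p)))  # 66: that path declared infeasible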

    A Decomposition Approach for the Multi-Modal, Resource-Constrained, Multi-Project Scheduling Problem with Generalized Precedence and Expediting Resources

    The field of project scheduling has received a great deal of study for many years, with a steady evolution of problem complexity and solution methodologies. As solution methodologies and technologies improve, increasingly complex, real-world problems are addressed, presenting researchers with a continuing challenge to find ever more effective means of approaching project scheduling. This dissertation introduces a project scheduling problem which is applicable across a broad spectrum of real-world situations. The problem is based on the well-known Resource-Constrained Project Scheduling Problem, extended to include multiple modes, generalized precedence, and expediting resources. The problem is further extended to include multiple projects which have generalized precedence, renewable and nonrenewable resources, and expediting resources at the program level. The problem presented is one not previously addressed in the literature, nor is it one to which existing specialized project scheduling methodologies can be directly applied. This dissertation presents a decomposition approach for solving the problem, including algorithms for solving the decomposed subproblems and the master problem. This dissertation also describes a methodology for generating instances of the new problem, extending the way existing problem generators describe and construct network structures for this class of problem. The methodologies presented are demonstrated through extensive empirical testing.
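
    As background for readers unfamiliar with the base problem, the following is a minimal sketch of a serial schedule-generation scheme for a toy single-mode, single-resource RCPSP instance; the activities, durations, and capacity are invented, and none of the dissertation's extensions (multiple modes, generalized precedence, expediting resources, multiple projects) or its decomposition approach are modelled here.

        # Serial schedule-generation scheme for a toy single-mode RCPSP with
        # one renewable resource: schedule each activity at the earliest time
        # that respects its predecessors and the resource capacity.
        CAPACITY = 4
        # activity: (duration, resource demand, predecessors)
        ACTS = {
            "a": (3, 2, []),
            "b": (2, 3, ["a"]),
            "c": (4, 1, ["a"]),
            "d": (2, 2, ["b", "c"]),
        }

        def serial_sgs(acts, capacity):
            finish, usage = {}, {}          # finish times, resource usage per period
            for name in acts:               # insertion order already respects precedence here
                dur, dem, preds = acts[name]
                start = max((finish[p] for p in preds), default=0)
                while any(usage.get(t, 0) + dem > capacity for t in range(start, start + dur)):
                    start += 1
                for t in range(start, start + dur):
                    usage[t] = usage.get(t, 0) + dem
                finish[name] = start + dur
            return finish

        print(serial_sgs(ACTS, CAPACITY))   # {'a': 3, 'b': 5, 'c': 7, 'd': 9}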

    The integration of multi-color taint-analysis with dynamic symbolic execution for Java web application security analysis

    The view of IT security in today’s software development processes is changing. While IT security used to be seen mainly as a risk that had to be managed during the operation of IT systems, a class of security weaknesses is seen today as measurable quality aspects of IT system implementations, e.g., the number of paths allowing SQL injection attacks. Current trends, such as DevSecOps pipelines, therefore establish security testing in the development process, aiming to catch these security weaknesses before they make their way into production systems. At the same time, the analysis works differently than in functional testing, as security requirements are mostly universal and not project specific. Further, they measure the quality of the source code and not the function of the system. As a consequence, established testing strategies such as unit testing or integration testing are not applicable to security testing. Instead, a new category of tools is required in the software development process: IT security weakness analyzers. These tools scan the source code for security weaknesses independent of the functional aspects of the implementation. In general, such analyzers give stronger guarantees for the presence or absence of security weaknesses than functional testing strategies. In this thesis, I present a combination of dynamic symbolic execution and explicit dynamic multi-color taint analysis for the security analysis of Java web applications. Explicit dynamic taint analysis is an established monitoring technique that allows the precise detection of security weaknesses along a single program execution path, if any are present. Multi-color taint analysis means that different properties defining diverse security weaknesses can be expressed at the same time in different taint colors and are analyzed in parallel during the execution of a program path. Each taint color analyzes its own security weakness, and taint propagation can be tailored at specific sanitization points for this color. The downside of dynamic taint analysis is that it explores only a single path. Therefore, this technique requires a path generator component as a counterpart that ensures all relevant paths are explored. Dynamic symbolic execution is appropriate here, as enumerating all reachable execution paths in a program is its established strength. The Jaint framework presented here combines these two techniques in a single tool. More specifically, the thesis looks into SMT meta-solving, extending dynamic symbolic execution on Java programs with string operations, and the configuration problem of multi-color taint analysis in greater detail to enable Jaint for the analysis of Java web applications. The evaluation demonstrates that the resulting framework is the best research tool on the OWASP Benchmark. One of the two dynamic symbolic execution engines that I worked on as part of the thesis won gold in the Java track of SV-COMP 2022. The other demonstrates that it is possible to lift the implementation design from a research-specific JVM to an industry-grade JVM, paving the way for the future scaling of Jaint.
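
    The core multi-color taint-tracking idea can be illustrated with a small sketch; this is a toy Python model with invented names, not Jaint's JVM-level instrumentation: each value carries a set of taint colors, each color has its own sanitization point, and a sink only reacts to the colors it is sensitive to.

        # Toy multi-color taint tracking: values carry taint colors, sanitizers
        # remove exactly one color, and sinks check only the colors they care about.
        class Tainted(str):
            def __new__(cls, s, colors=frozenset()):
                obj = super().__new__(cls, s)
                obj.colors = frozenset(colors)
                return obj

        def concat(a, b):
            colors = getattr(a, "colors", frozenset()) | getattr(b, "colors", frozenset())
            return Tainted(str(a) + str(b), colors)

        def sanitize(value, color):
            # Sanitization point for a single color: removes only that color.
            return Tainted(str(value), getattr(value, "colors", frozenset()) - {color})

        def sql_sink(query):
            if "SQLI" in getattr(query, "colors", frozenset()):
                raise RuntimeError("potential SQL injection: " + str(query))

        user_input = Tainted("1 OR 1=1", {"SQLI", "XSS"})
        query = concat("SELECT * FROM t WHERE id = ", user_input)
        sql_sink(sanitize(query, "SQLI"))   # ok: the SQLI color was sanitized
        sql_sink(query)                     # raises: SQLI taint reaches the sink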

    Worst-Case Execution Time Guarantees for Runtime-Reconfigurable Architectures

    Real-time systems are ubiquitous in our everyday life, e.g., in safety-critical domains such as automotive, avionics or robotics. The correctness of a real-time system depends not only on the correctness of its calculations, but also on the non-functional requirement of adhering to deadlines. Failing to meet a deadline may lead to severe malfunctions; therefore, worst-case execution times (WCET) need to be guaranteed. Despite significant scientific advances, however, timing analysis for WCET guarantees lags years behind current high-performance microarchitectures with out-of-order scheduling pipelines, several hardware threads and multiple (shared) cache layers. To satisfy the increasing performance demands of real-time systems, analyzable performance features are required. In order to escape the scarcity of timing-analyzable performance features, the main contribution of this thesis is the introduction of runtime reconfiguration of hardware accelerators onto a field-programmable gate array (FPGA) as a novel means to achieve performance that is amenable to WCET guarantees. Instead of designing an architecture for a specific application domain, this approach preserves the flexibility of the system. First, this thesis contributes novel co-scheduling approaches to distribute work among CPU and GPU in an extensive analysis of how (average-case) performance is achieved on fused CPU-GPU architectures, a main trend in current high-performance microarchitectures that combines a CPU and a GPU on a single chip. Being able to employ such architectures in real-time systems would be highly desirable, because they provide high performance within a limited area and power budget. As a result of this analysis, however, a cache coherency bottleneck is uncovered in recent fused CPU-GPU architectures that share the last level cache between CPU and GPU. This insight (i) complicates performance predictions and (ii) adds a shared last level cache between CPU and GPU to the growing list of microarchitectural features that benefit average-case performance but render the analysis of WCET guarantees on high-performance architectures virtually infeasible, further motivating the need for novel microarchitectural features that provide predictable performance and are amenable to timing analysis. Towards this end, a runtime reconfiguration controller called "Command-based Reconfiguration Queue" (CoRQ) is presented that provides guaranteed latencies for its operations, especially for the reconfiguration delay, i.e., the time it takes to reconfigure a hardware accelerator onto a reconfigurable fabric (e.g., an FPGA). CoRQ enables the design of timing-analyzable runtime-reconfigurable architectures that support WCET guarantees. Based on the (now feasible) guaranteed reconfiguration delay of accelerators, a WCET analysis is introduced that enables tasks to reconfigure application-specific custom instructions (CIs) at runtime. CIs are executed by a processor pipeline and invoke execution of one or more accelerators. Different measures to deal with reconfiguration delays are compared for their impact on accelerated WCET guarantees and overestimation. The timing anomaly of runtime reconfiguration is identified and safely bounded: a case where executing iterations of a computational kernel faster than in WCET during reconfiguration of CIs can prolong the total execution time of a task.
    Once tasks that perform runtime reconfiguration of CIs can be analyzed for WCET guarantees, the question arises which CIs to configure on a constrained reconfigurable area in order to optimize the WCET. The question is addressed for systems where multiple CIs, each with several implementations (allowing a trade-off between latency and area requirements), can be selected. This is generally the case, e.g., when employing high-level synthesis. This so-called WCET-optimizing instruction set selection problem is modeled based on the Implicit Path Enumeration Technique (IPET), the path analysis technique that state-of-the-art timing analyzers rely on. To our knowledge, this is the first approach that enables WCET optimization with support for making use of global program flow information (and information about reconfiguration delay). An optimal algorithm (similar to branch and bound) and a fast greedy heuristic (which achieves the optimal solution in most cases) are presented. Finally, an approach is presented that, for the first time, combines optimized static WCET guarantees with runtime optimization of the average-case execution (while maintaining WCET guarantees) using runtime reconfiguration of hardware accelerators by leveraging runtime slack (the amount of time by which program parts execute faster than their WCET). It comprises an analysis of runtime slack bounds that enable safe reconfiguration for average-case performance under WCET guarantees and presents a mechanism to monitor runtime slack using a simple performance counter that is commonly available in many microprocessors. Ultimately, this thesis shows that runtime reconfiguration of accelerators is a key feature for achieving predictable performance.
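
    The runtime-slack mechanism from the last contribution can be sketched in a few lines; the program parts, cycle counts, and reconfiguration delay below are invented for illustration, and the thesis tracks this bookkeeping with a hardware performance counter rather than in software.

        # Accumulate slack = (WCET bound - observed cycles) over completed
        # program parts and allow a reconfiguration only when the slack
        # covers its worst-case delay, so the task-level WCET guarantee holds.
        WCET_BOUND = {"init": 100, "kernel1": 400, "kernel2": 300}   # cycles, invented
        RECONF_DELAY = 250                                           # worst-case delay, cycles

        def may_reconfigure(observed):
            slack = sum(WCET_BOUND[part] - cycles for part, cycles in observed.items())
            return slack >= RECONF_DELAY, slack

        print(may_reconfigure({"init": 60, "kernel1": 310}))                  # (False, 130)
        print(may_reconfigure({"init": 60, "kernel1": 310, "kernel2": 180}))  # (True, 250)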

    The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems

    We present a compendium of numerical simulation techniques, based on tensor network methods, aiming to address problems of many-body quantum mechanics on a classical computer. The core setting of this anthology is lattice problems in low spatial dimension at finite size, a physical scenario where tensor network methods, both Density Matrix Renormalization Group and beyond, have long proven to be winning strategies. Here we explore in detail the numerical frameworks and methods employed to deal with low-dimensional physical setups, from a computational physics perspective. We focus on symmetries and closed-system simulations in arbitrary boundary conditions, while discussing the numerical data structures and linear algebra manipulation routines involved, which form the core libraries of any tensor network code. At a higher level, we put the spotlight on loop-free network geometries, discussing their advantages and presenting in detail algorithms to simulate low-energy equilibrium states. Accompanied by discussions of data structures, numerical techniques and performance, this anthology serves as a programmer's companion, as well as a self-contained introduction to and review of the basic and selected advanced concepts in tensor networks, including examples of their applications. Comment: 115 pages, 56 figures
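
    As a minimal taste of the data structures discussed, the sketch below builds an exact matrix product state (MPS) for a small random state via successive SVDs and contracts it back; it uses plain numpy, open boundary conditions and no truncation, and only illustrates the basic object, not the anthology's algorithms.

        # Split a small state vector into MPS tensors and contract it back,
        # checking that the original state is recovered exactly (no truncation).
        import numpy as np

        def to_mps(psi, n_sites, d=2):
            # Successive SVDs produce one rank-3 tensor (left bond, physical, right bond) per site.
            tensors, rest, chi = [], psi.reshape(1, -1), 1
            for _ in range(n_sites - 1):
                rest = rest.reshape(chi * d, -1)
                u, s, vh = np.linalg.svd(rest, full_matrices=False)
                tensors.append(u.reshape(chi, d, -1))
                rest, chi = np.diag(s) @ vh, u.shape[1]
            tensors.append(rest.reshape(chi, d, 1))
            return tensors

        def contract(tensors):
            # Contract the shared bond indices back into a dense state vector.
            acc = tensors[0]
            for t in tensors[1:]:
                acc = np.tensordot(acc, t, axes=([-1], [0]))
            return acc.reshape(-1)

        n = 4
        psi = np.random.rand(2 ** n); psi /= np.linalg.norm(psi)
        print(np.allclose(contract(to_mps(psi, n)), psi))   # True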

    Logic Synthesis for Established and Emerging Computing

    Logic synthesis is an enabling technology to realize integrated computing systems, and it entails solving computationally intractable problems through a plurality of heuristic techniques. A recent push toward further formalization of synthesis problems has proven very useful both for attempting to solve some logic problems exactly (which is computationally possible for instances of limited size today) and for creating new and more powerful heuristics based on problem decomposition. Moreover, technological advances including nanodevices, optical computing, and quantum and quantum cellular computing require new and specific synthesis flows to assess feasibility and scalability. This review highlights recent progress in logic synthesis and optimization, describing models, data structures, and algorithms, with specific emphasis on both design quality and emerging technologies. Example applications and results of novel techniques applied to established and emerging technologies are reported.
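
    One core data structure behind many modern synthesis flows is the and-inverter graph; the following is a minimal Python sketch of an AIG with structural hashing (the class and method names are invented for this example), showing how structurally identical AND nodes are shared rather than duplicated.

        # Toy and-inverter graph: literals are (node index, inverted) pairs and
        # a structural hash ensures each distinct AND node is created only once.
        class Aig:
            def __init__(self):
                self.nodes = []            # each AND node stores its two fanin literals
                self.strash = {}           # structural hash: sorted fanins -> node index

            def input(self):
                self.nodes.append(None)    # primary input has no fanins
                return (len(self.nodes) - 1, False)

            def and_(self, a, b):
                key = tuple(sorted((a, b)))
                if key not in self.strash:
                    self.nodes.append(key)
                    self.strash[key] = len(self.nodes) - 1
                return (self.strash[key], False)

            def not_(self, a):
                return (a[0], not a[1])

        aig = Aig()
        x, y = aig.input(), aig.input()
        f = aig.and_(x, aig.not_(y))
        g = aig.and_(aig.not_(y), x)       # same structure, different argument order
        print(f == g, len(aig.nodes))      # True 3  -> the AND node is shared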

    Multi-model Consistency through Transitive Combination of Binary Transformations

    Software systems are often described by a multitude of models, each of which captures different system properties. These models can contain shared information, which leads to redundant descriptions and dependencies between the models. For the system description to be correct, all shared information must be described consistently across the models. Evolving one model can lead to inconsistencies with other models of the same system. It is therefore important to apply a consistency restoration mechanism after changes have been made. Manual consistency restoration is error-prone and time-consuming, which is why automated consistency restoration is necessary. Many existing approaches use binary transformations to restore consistency between two models, but systems are generally described by more than two models. To achieve consistency preservation for multiple models with binary transformations, these transformations must be combined through transitive execution. In this master's thesis, we investigate the transitive combination of binary transformations and the problems that come with it. We develop a catalogue of six error potentials that can lead to consistency errors. Knowledge of these error potentials can inform transformation developers about possible problems when combining transformations. One of the error potentials arises as a consequence of the topology of the transformation network and the model types used, and can only be avoided through topology changes. Another error potential arises when the combined transformations try to fulfil mutually contradictory consistency rules; this can only be resolved by adapting the consistency rules. Both error potentials are case-dependent and cannot be resolved without knowing which transformations are combined. In addition, two implementation patterns were designed to prevent two further error potentials. They can be applied to the individual transformation definitions, independently of which transformations are eventually combined. For the two remaining error potentials, no general solutions have been found yet. We evaluate the results with a case study consisting of two independently developed binary transformations between a component-based software architecture model, a UML class diagram, and the corresponding Java implementation. All errors found could be assigned to one of the error potentials, which indicates the completeness of the error catalogue. The implementation patterns developed were able to fix all errors assigned to the error potential they were designed for, which accounted for 70% of all errors found. This shows that the implementation patterns are indeed applicable and can prevent errors.
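
    The notion of transitive combination can be illustrated with a minimal sketch: two binary transformations, one keeping an architecture model consistent with a class model and one keeping the class model consistent with a code model, are chained so that a change in the first model reaches the third. The dictionary-based models and rules below are invented for illustration and do not correspond to the transformations used in the case study.

        # Two toy binary transformations chained transitively: architecture -> classes -> code.
        def t1(architecture, classes):
            # Consistency rule: every component must appear as a class of the same name.
            for comp in architecture["components"]:
                classes.setdefault("classes", set()).add(comp)
            return classes

        def t2(classes, code):
            # Consistency rule: every class must appear as a Java type of the same name.
            for cls in classes.get("classes", set()):
                code.setdefault("types", set()).add(cls)
            return code

        architecture = {"components": {"Billing", "Shipping"}}
        classes, code = {}, {}

        # Transitive combination: execute the binary transformations in sequence.
        code = t2(t1(architecture, classes), code)
        print(code)   # {'types': {'Billing', 'Shipping'}}

        # One error potential in miniature: if t2 enforced a rule that contradicts
        # t1's (e.g. a mandatory prefix on type names), no chained execution order
        # could satisfy both consistency rules at once.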