377 research outputs found

    Annual Report 1999 / Department for Computer Science

    Get PDF
    Presentation of the Department for Computer Science of the BTU Cottbus and reports of the department's chairs for the year 1999.

    Formal Verification of Security Protocol Implementations: A Survey

    Get PDF
    Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, differ significantly from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements protocol logic, rather than the libraries that implement cryptography. In these approaches, the libraries are assumed to correctly implement some models. The aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches, model extraction and code generation, are presented, along with the main techniques adopted for each approach.

    Systematic Design Methods for Efficient Off-Chip DRAM Access

    No full text
    Typical design flows for digital hardware take, as their input, an abstract description of computation and data transfer between logical memories. No existing commercial high-level synthesis tool demonstrates the ability to map logical memory inferred from a high-level language to external memory resources. This thesis develops techniques for doing this, specifically targeting off-chip dynamic memory (DRAM) devices. These are a commodity technology in widespread use with standardised interfaces. In use, the bandwidth of an external memory interface and the latency of memory requests asserted on it may become the bottleneck limiting the performance of a hardware design. Careful consideration of this is especially important when designing with DRAMs, whose latency and bandwidth characteristics depend upon the sequence of memory requests issued by a controller. Throughout the work presented here, we pursue exact compile-time methods for designing application-specific memory systems with a focus on guaranteeing predictable performance through static analysis. This contrasts with much of the surveyed existing work, which considers general-purpose memory controllers and optimized policies that improve performance in experiments run using simulation of suites of benchmark codes. The work targets loop nests within imperative source code, extracting a mathematical representation of the loop-nest statements and their associated memory accesses, referred to as the ‘Polytope Model’. We extend this mathematical representation to represent the physical DRAM ‘row’ and ‘column’ structures accessed when performing memory transfers. From this augmented representation, we can automatically derive DRAM controllers which buffer data in on-chip memory and transfer data in an efficient order. Buffering data and exploiting ‘reuse’ of data is shown to enable up to a 50× reduction in the quantity of data transferred to external memory. Reordering memory transactions using knowledge of the physical layout of the DRAM device allows up to a 4× improvement in the efficiency of those data transfers.
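    To make the row/column reordering argument concrete, the sketch below (in Python, with made-up DRAM geometry and access pattern rather than the thesis's actual toolchain) counts row activations for an access sequence before and after sorting it by physical row:

```python
# Illustrative sketch only: why reordering DRAM requests by physical row reduces
# costly row activations. The geometry (1024 columns per row) and the access
# pattern are assumed example values, not parameters taken from the thesis.

ROW_SIZE = 1024  # words per DRAM row (assumed)

def row_col(addr):
    """Split a flat word address into (row, column)."""
    return addr // ROW_SIZE, addr % ROW_SIZE

def row_activations(addresses):
    """Count row activations: one each time the open row changes."""
    activations, open_row = 0, None
    for addr in addresses:
        row, _ = row_col(addr)
        if row != open_row:
            activations += 1
            open_row = row
    return activations

# A column-major walk over a 4-row tile touches a different row on every access...
accesses = [r * ROW_SIZE + c for c in range(8) for r in range(4)]
print(row_activations(accesses))          # 32 activations for 32 accesses
# ...whereas the same accesses, reordered (sorted) by row, open each row only once.
print(row_activations(sorted(accesses)))  # 4 activations
```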

    Cost-Based Optimization of Integration Flows

    Get PDF
    Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. Application areas include real-time ETL and data synchronization between operational systems. Because of increasing data volumes, highly distributed IT infrastructures, and high requirements for data consistency and freshness of query results, many instances of integration flows are executed over time. Due to this high load and the blocking of synchronous source systems, the performance of the central integration platform is crucial for an IT infrastructure. To tackle these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows that relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and adaptation delays, where we miss optimization opportunities. This approach ensures low optimization overhead and fast workload adaptation.
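    As a rough illustration of the on-demand re-optimization idea, the sketch below (hypothetical names, statistics and threshold; not the thesis's actual cost model or algorithm) maintains running per-operator statistics and triggers re-optimization only when they drift from the statistics the current plan was optimized for:

```python
# Illustrative sketch of on-demand re-optimization for an integration flow.
# The running-average statistic, drift threshold, and optimizer callback are
# placeholder assumptions; the thesis defines its own cost model and triggers.

class FlowRuntime:
    def __init__(self, plan, optimizer, drift_threshold=0.3):
        self.plan = plan                  # ordered list of operators
        self.optimizer = optimizer        # callable: (plan, statistics) -> new plan
        self.drift_threshold = drift_threshold
        self.baseline = {}                # statistics the current plan was optimized for
        self.current = {}                 # incrementally maintained statistics

    def record(self, name, exec_time):
        """Incremental statistics maintenance (running average per operator)."""
        n, avg = self.current.get(name, (0, 0.0))
        self.current[name] = (n + 1, avg + (exec_time - avg) / (n + 1))

    def drifted(self):
        """Has the workload changed noticeably since the last optimization?"""
        for name, (_, avg) in self.current.items():
            _, base = self.baseline.get(name, (0, avg))
            if base and abs(avg - base) / base > self.drift_threshold:
                return True
        return False

    def execute_instance(self, message):
        for op in self.plan:
            self.record(op.name, op.run(message))   # op.run returns its execution time
        if self.drifted():
            # Re-optimize only on demand, avoiding unnecessary periodical
            # re-optimization steps while still adapting to workload changes.
            self.plan = self.optimizer(self.plan, self.current)
            self.baseline = dict(self.current)
```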

    Formally designing and implementing cyber security mechanisms in industrial control networks.

    Get PDF
    This dissertation describes progress in the state-of-the-art for developing and deploying formally verified cyber security devices in industrial control networks. It begins by detailing the unique struggles that are faced in industrial control networks and why concepts and technologies developed for securing traditional networks might not be appropriate. It uses these unique struggles and examples of contemporary cyber-attacks targeting control systems to argue that progress in securing control systems is best met with formal verification of systems, their specifications, and their security properties. This dissertation then presents a development process and identifies two technologies, TLA+ and seL4, that can be leveraged to produce a high-assurance embedded security device. The method presented in this dissertation takes an informal design of an embedded device that might be found in a control system and 1) formalizes the design within TLA+, 2) creates and mechanically checks a model built from the formal design, and 3) translates the TLA+ design into a component-based architecture of a native seL4 application. The later chapters of this dissertation describe an application of the process to a security preprocessor embedded device that was designed to add security mechanisms to the network communication of an existing control system. The device and its security properties are formally specified in TLA+ in chapter 4, mechanically checked in chapter 5, and finally its native seL4 architecture is implemented in chapter 6. Finally, the conclusions derived from the research are laid out, as well as some possibilities for expanding the presented method in the future.

    The Role of Computers in Research and Development at Langley Research Center

    Get PDF
    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objectives of the workshop were to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

    Indexed dependence metadata and its applications in software performance optimisation

    No full text
    To achieve continued performance improvements, modern microprocessor design is tending to concentrate an increasing proportion of hardware on computation units with less automatic management of data movement and extraction of parallelism. As a result, architectures increasingly include multiple computation cores and complicated, software-managed memory hierarchies. Compilers have difficulty characterizing the behaviour of a kernel in a general enough manner to enable automatic generation of efficient code in any but the most straightforward of cases. We propose the concept of indexed dependence metadata to improve application development and mapping onto such architectures. The metadata represent both the iteration space of a kernel and the mapping from a given index in that iteration space to the set of data elements that iteration might use: thus the dependence metadata are indexed by the kernel’s iteration space. This explicit mapping allows the compiler or runtime to optimise the program more efficiently, and improves the program structure for the developer. We argue that this form of explicit interface specification reduces the need for premature, architecture-specific optimisation. It improves program portability, supports inter-component optimisation and enables generation of efficient data movement code. We offer the following contributions: an introduction to the concept of indexed dependence metadata as a generalisation of stream programming, a demonstration of its advantages in a component programming system, the decoupled access/execute model for C++ programs, and how indexed dependence metadata might be used to improve the programming model for GPU-based designs. Our experimental results with prototype implementations show that indexed dependence metadata supports automatic synthesis of double-buffered data movement for the Cell processor and enables aggressive loop fusion optimisations in image processing, linear algebra and multigrid application case studies.
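    A minimal sketch of the core idea, assuming a hypothetical Python interface rather than the component system described in the thesis: a kernel is described by its iteration space together with access functions from an iteration index to the elements it reads and writes, and a runtime can query this mapping to stage the exact footprint of a block of iterations before executing it.

```python
# Illustrative sketch of indexed dependence metadata: the metadata pairs a
# kernel's iteration space with access functions mapping an iteration index to
# the data elements it reads and writes. The names and the 3-point stencil
# example are assumptions for illustration, not the thesis's component system.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class KernelMetadata:
    iteration_space: range
    reads: Callable[[int], Iterable[int]]    # iteration index -> elements read
    writes: Callable[[int], Iterable[int]]   # iteration index -> elements written

def stencil_metadata(n):
    """Metadata for a 1-D 3-point stencil over an array of length n."""
    return KernelMetadata(
        iteration_space=range(1, n - 1),
        reads=lambda i: (i - 1, i, i + 1),
        writes=lambda i: (i,),
    )

def read_footprint(meta, block):
    """Elements a block of iterations will read: what a runtime must stage
    (e.g. copy into a local buffer, or double-buffer) before running the block."""
    return sorted({e for i in block for e in meta.reads(i)})

meta = stencil_metadata(16)
print(read_footprint(meta, range(1, 5)))   # [0, 1, 2, 3, 4, 5]
```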

    Discrete Event Simulations

    Get PDF
    Discrete event simulation (DES) is widely regarded as a technique for modelling stochastic, dynamic and discretely evolving systems, and it has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in very different areas, this book reflects many different points of view about DES: each author describes how it is understood and applied within their own context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications to a wide range of sectors and problem areas, which have been categorised into five groups. Beyond this variety of viewpoints, one additional thing is worth remarking about this book: its richness in actual data and in analyses based on actual data. While many academic works lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. The editor therefore firmly believes that this book will be interesting for both beginners and practitioners in the area of DES.

    Automata-theoretic protocol programming : parallel computation, threads and their interaction, optimized compilation, [at a] high level of abstraction

    Get PDF
    In the early 2000s, hardware manufacturers shifted their attention from manufacturing faster—yet purely sequential—unicore processors to manufacturing slower—yet increasingly parallel—multicore processors. In the wake of this shift, parallel programming became essential for writing scalable programs on general hardware. Conceptually, every parallel program consists of workers, which implement primary units of sequential computation, and protocols, which implement the rules of interaction that workers must abide by. As programmers have been writing sequential code for decades, programming workers poses no new fundamental challenges. What is new—and notoriously difficult—is programming of protocols. In this thesis, I study an approach to protocol programming where programmers implement their workers in an existing general-purpose language (GPL), while they implement their protocols in a complementary domain-specific language (DSL). DSLs for protocols enable programmers to express interaction among workers at a higher level of abstraction than the level of abstraction supported by today’s GPLs, thereby addressing a number of protocol programming issues with today’s GPLs. In particular, in this thesis, I develop a DSL for protocols based on a theory of formal automata and their languages. The specific automata that I consider, called constraint automata, have transition labels with a richer structure than alphabet symbols in classical automata theory. Exactly these richer transition labels make constraint automata suitable for modeling protocols. Constraint automata constitute the (denotational) semantics of the DSL presented in this thesis. On top of this semantics, I use two complementary syntaxes: an existing graphical syntax (based on the coordination language Reo) and a novel textual syntax. The main contribution of this thesis, then, consists of a compiler and four of its optimizations, all formalized and proven correct at the semantic level of constraint automata, using bisimulation. In addition to these theoretical contributions, I also present an implementation of the compiler and its optimizations, which supports Java as the complementary GPL, as plugins for Eclipse. Nothing in the theory developed in this thesis depends on Java, though; any language that supports some form of threading and mutual exclusion may serve as a target for compilation. To demonstrate the practical feasibility of the GPL+DSL approach to protocol programming, I study the performance of the implemented compiler and its optimizations through a number of experiments, including the Java version of the NAS Parallel Benchmarks. The experimental results in these benchmarks show that, with all four optimizations in place, compiler-generated protocol code can compete with hand-crafted protocol code.
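    As a rough illustration of what a constraint automaton looks like operationally (a toy Python encoding with assumed names, not the thesis's DSL, its Reo syntax, or its compiler), the sketch below models a two-port alternation protocol whose transition labels pair a set of synchronously firing ports with a constraint on the observed data:

```python
# Illustrative toy encoding of a constraint automaton: each transition is
# labelled with a set of ports that fire synchronously plus a constraint on the
# data observed at those ports. The two-state alternator below (ports "a" and
# "b" take turns) is an assumed example, not the thesis's DSL or compiler.

class ConstraintAutomaton:
    def __init__(self, start, transitions):
        # transitions: {state: [(frozenset_of_ports, data_constraint, next_state)]}
        self.state = start
        self.transitions = transitions

    def step(self, observation):
        """observation: dict mapping each firing port to the datum it carries."""
        fired = frozenset(observation)
        for ports, constraint, nxt in self.transitions.get(self.state, []):
            if ports == fired and constraint(observation):
                self.state = nxt
                return True
        return False   # the protocol forbids this observation in the current state

alternator = ConstraintAutomaton(
    start="expect_a",
    transitions={
        "expect_a": [(frozenset({"a"}), lambda obs: True, "expect_b")],
        "expect_b": [(frozenset({"b"}), lambda obs: True, "expect_a")],
    },
)

print(alternator.step({"a": 1}))   # True: port a may fire first
print(alternator.step({"a": 2}))   # False: the protocol requires port b next
```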

    JTorX: Exploring Model-Based Testing

    Get PDF
    The overall goal of the work described in this thesis is: "To design a flexible tool for state-of-the-art model-based derivation and automatic application of black-box tests for reactive systems, usable both for education and outside an academic context." From this goal, we derive functional and non-functional design requirements. The core of the thesis is a discussion of the design, in which we show how the functional requirements are fulfilled. In addition, we provide evidence to validate the non-functional requirements, in the form of case studies and responses to a tool user questionnaire. We describe the overall architecture of our tool, and discuss three usage scenarios which are necessary to fulfill the functional requirements: random on-line testing, guided on-line testing, and off-line test derivation and execution. With on-line testing, test derivation and test execution take place in an integrated manner: a next test step is derived only when it is needed for execution. With random testing, test derivation performs a random walk through the model. With guided testing, additional (guidance) information is used during test derivation to guide it through specific paths in the model. With off-line testing, test derivation and test execution take place as separate activities. In our architecture we identify two major components: a test derivation engine, which synthesizes test primitives from a given model and from optional test guidance information, and a test execution engine, which contains the functionality to connect the test tool to the system under test. We refer to this latter functionality as the "adapter". In the description of the test derivation engine, we look at the same three usage scenarios, and we discuss support for visualization, and for dealing with divergence in the model. In the description of the test execution engine, we discuss three example adapter instances, and then generalise these into a general adapter design. We conclude with a description of extensions to deal with symbolic treatment of data and time.
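    A minimal sketch of the random on-line testing scenario described above, assuming hypothetical model and adapter interfaces rather than JTorX's actual APIs: each test step is derived from the model only when it is needed, and every observed output is checked against the outputs the model allows.

```python
import random

# Illustrative sketch of random on-line model-based testing: a next test step is
# derived from the model only when it is needed for execution. The model and
# adapter interfaces used here are assumptions for illustration, not JTorX's
# actual APIs.

def online_random_test(model, adapter, steps=100):
    state = model.initial_state()
    for _ in range(steps):
        inputs = model.enabled_inputs(state)
        if inputs and random.random() < 0.5:
            # Stimulate: pick an input allowed by the model and send it to the
            # system under test through the adapter.
            stimulus = random.choice(inputs)
            adapter.send(stimulus)
            state = model.after(state, stimulus)
        else:
            # Observe: read an output from the system under test and check that
            # the model allows it in the current state.
            output = adapter.receive()
            if output not in model.enabled_outputs(state):
                return "fail", output
            state = model.after(state, output)
    return "pass", None
```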