
    Verification of Concurrent Systems: Optimality, Scalability and Applicability

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, defended 14-10-2020. Both verification and testing of concurrent systems require exploring all possible non-deterministic interleavings that the concurrent execution may have, as any of these interleavings may reveal an erroneous behavior of the system. This introduces a combinatorial explosion in the number of program states that must be considered, which often leads to a computationally intractable problem. The overall goal of this thesis is to investigate novel techniques for testing and verification of concurrent programs that reduce this combinatorial explosion...
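
    To make the scale of that explosion concrete, here is a minimal Python sketch (an editorial illustration, not from the thesis) that enumerates every interleaving of two threads' instruction sequences; the number of schedules grows as the binomial coefficient C(n+m, n).

        from itertools import combinations
        from math import comb

        def interleavings(a, b):
            # Yield every order in which the steps of threads `a` and `b`
            # can be merged while preserving each thread's internal order.
            n, m = len(a), len(b)
            for slots in combinations(range(n + m), n):
                ai, bi, out, chosen = iter(a), iter(b), [], set(slots)
                for i in range(n + m):
                    out.append(next(ai) if i in chosen else next(bi))
                yield tuple(out)

        print(len(list(interleavings("ab", "xy"))))  # C(4, 2) = 6 schedules
        print(comb(20, 10))  # two 10-step threads already give 184,756 schedules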

    Formally Integrating Real-Time Specification: A Research Proposal

    To date, research in reasoning about timing properties of real-time programs has considered specification and implementation as separate issues. Specification uses formal methods; it abstracts out program execution, defining a specification that is independent of any machine-specific details (see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] for examples). In this manner, it describes only the high-level timing requirements of processes in the system, and the dependencies between them. One then typically attempts to prove the mutual consistency of these timing constraints, or to determine whether the constraints maintain a safety property critical to system correctness. However, since the model has abstracted out machine-specific details, these correctness proofs either assume a very optimistic operating environment (such as a one-to-one assignment of processes to processors), or make very pessimistic assumptions (such as that all interleavings of process executions are possible). Since neither of these assumptions holds in practice, these predictions about the behavior of the system may not be accurate.

    The implementation level captures this operating environment: a real-time system is characterized by such things as process schedulers, devices and local clocks. However, advances here have been primarily in scheduling theory (examples of which are [15, 16]) and language design (examples of which are [15, 16, 17, 18, 19, 20]). Unfortunately, since formal models have not been used at this level, proofs of time-related properties cannot be made. To construct these proofs, we must show that an implementation is correct with respect to a specification; timing properties that can be shown to hold for the specification will therefore be known to hold for the implementation. We therefore need to represent the implementation formally so as to prove that the implementation satisfies the specification. The proof of satisfaction requires a well-defined formal mapping between the implementation and specification models.

    We therefore propose to develop an integrated bi-level approach to the problem of reasoning about timing properties of real-time programs. At the specification level, we will use the Timed Acceptances model, a logically sound and complete axiom system which we have recently developed [21]. Using this model, the effect of interaction among time-dependent processes can be precisely specified and then analyzed. We will then develop a formal implementation model (similar to the specification model) which captures operational behaviors: for example, the assignment of processes to processors, assumptions about scheduling and clock synchronization, and the different treatment of execution and wait times. A mapping will then be formulated between these two layers. The bulk of our proposed work will be to formulate the implementation layer and define a mapping between it and the specification layer. We also need to continue work on the Timed Acceptances model to facilitate its use as a specification model, and to provide hooks for mappings between the two layers.

    The rest of this proposal is organized as follows. The next section overviews related work in formal specification models. Section 3 describes our current specification model and proposed enhancements. We also detail the proposed implementation model, and the required properties of the mappings between the two models. Section 4 provides a summary of the proposed research, and a yearly plan.
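
    As a hedged illustration of what "proving the mutual consistency of timing constraints" can amount to (this is the standard difference-constraint technique, not the Timed Acceptances model itself, and the constraint data is hypothetical): constraints of the form t_j - t_i <= c induce a weighted graph, and the constraint set is inconsistent exactly when that graph has a negative cycle, which Bellman-Ford-style relaxation detects.

        def consistent(n_events, constraints):
            # constraints: list of (i, j, c) meaning t_j - t_i <= c.
            # Returns False iff the induced graph has a negative cycle,
            # i.e. no assignment of event times can satisfy all constraints.
            dist = [0.0] * n_events          # implicit source reaches all events
            for _ in range(n_events):
                changed = False
                for i, j, c in constraints:
                    if dist[i] + c < dist[j]:
                        dist[j] = dist[i] + c
                        changed = True
                if not changed:
                    return True              # relaxation converged: consistent
            return False                     # still relaxing: negative cycle

        # t1 - t0 <= 5, t2 - t1 <= 3, t0 - t2 <= -9 sums to 0 <= -1: inconsistent.
        print(consistent(3, [(0, 1, 5), (1, 2, 3), (2, 0, -9)]))  # False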

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, and industrial applications.

    Master/worker parallel discrete event simulation

    The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed, providing robust execution under a dynamic set of services with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant interaction by users. Research questions and challenges associated with the issues and limitations of the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications to be used in this context are analyzed and discussed. A portable web services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations to increase the efficiency of large-scale simulation execution through distributed master service design and intrinsic overhead reduction. New techniques for addressing challenges associated with optimistic parallel discrete event simulation across metacomputing infrastructures, such as rollbacks and message unsending, with an inherently different computation paradigm utilizing master services and time windows are proposed and examined. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means for high throughput parallel discrete event simulation by enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.

    Ph.D. Committee Chair: Fujimoto, Richard; Committee Members: Bader, David; Perumalla, Kalyan; Riley, George; Vuduc, Richard.
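
    As a hedged sketch of the time-window idea mentioned above (an illustration only, not the dissertation's web-services implementation): a master releases only events whose timestamps fall inside the current window, so loosely coupled workers never observe an event from beyond the window and the need for rollbacks is bounded.

        import heapq

        def run_windows(events, window):
            # events: list of (timestamp, payload); window: width in sim time.
            heapq.heapify(events)
            now = 0.0
            while events:
                horizon = now + window
                batch = []
                while events and events[0][0] < horizon:
                    batch.append(heapq.heappop(events))
                # In the real architecture each item in `batch` would be handed
                # to an idle worker; here we simply process the batch in order.
                for ts, payload in sorted(batch):
                    print(f"t={ts:6.2f}  {payload}")
                now = horizon

        run_windows([(0.5, "arrive"), (1.7, "depart"), (2.1, "arrive")], window=1.0)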

    A Methodology for Information Flow Experiments

    Information flow analysis has largely ignored the setting where the analyst has neither control over nor a complete model of the analyzed system. We formalize such limited information flow analyses and study an instance of it: detecting the usage of data by websites. We prove that these problems are ones of causal inference. Leveraging this connection, we push beyond traditional information flow analysis to provide a systematic methodology based on experimental science and statistical analysis. Our methodology allows us to systematize prior works in the area, viewing them as instances of a general approach. Our systematic study leads to practical advice for improving work on detecting data usage, a previously unformalized area. We illustrate these concepts with a series of experiments collecting data on the use of information by websites, which we statistically analyze.
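
    A minimal sketch of the experimental flavor of such a methodology (the profile counts and outcomes below are fabricated for illustration): randomly assign browser profiles to a treatment group that reveals a datum to a site and a control group that does not, then test whether the site's observable behavior differs, here with a simple permutation test.

        import random

        def permutation_p_value(treated, control, trials=10_000, seed=0):
            # One-sided p-value for "treatment raises the outcome rate".
            rng = random.Random(seed)
            observed = sum(treated) / len(treated) - sum(control) / len(control)
            pooled = treated + control
            hits = 0
            for _ in range(trials):
                rng.shuffle(pooled)
                t, c = pooled[:len(treated)], pooled[len(treated):]
                if sum(t) / len(t) - sum(c) / len(c) >= observed:
                    hits += 1
            return hits / trials

        treated = [1] * 14 + [0] * 6    # 14/20 treated profiles saw the targeted ad
        control = [1] * 4 + [0] * 16    #  4/20 control profiles did
        print(permutation_p_value(treated, control))  # small p => data usage detected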

    Benchmarking Eventually Consistent Distributed Storage Systems

    Cloud storage services and NoSQL systems typically offer only "Eventual Consistency", a rather weak guarantee covering a broad range of potential data consistency behavior. The degree of actual (in-)consistency, however, is unknown. This work presents novel solutions for determining the degree of (in-)consistency via simulation and benchmarking, as well as the necessary means to resolve inconsistencies leveraging this information.
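
    One way such a benchmark can measure the inconsistency window, sketched below with a hypothetical get/put client standing in for a real eventually consistent store: write a fresh token, then poll reads (ideally from another client or region) and record how long it takes until the new value becomes visible.

        import time

        def staleness_window(store, key, timeout=10.0, poll=0.05):
            token = f"probe-{time.time_ns()}"
            start = time.monotonic()
            store.put(key, token)
            while time.monotonic() - start < timeout:
                if store.get(key) == token:
                    return time.monotonic() - start   # observed inconsistency window
                time.sleep(poll)
            return None                               # never converged within timeout

        class InMemoryStore:                          # trivial stand-in for a demo run
            def __init__(self): self.data = {}
            def put(self, k, v): self.data[k] = v
            def get(self, k): return self.data.get(k)

        print(staleness_window(InMemoryStore(), "k"))  # ~0 for an immediately consistent store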

    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems and runtime techniques; dynamical, hybrid, and reactive systems. Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Including the workload effect in the parallel program signature

    Performance prediction and application behavior modeling have been the subject of extensive research that aims to estimate application performance with acceptable precision. A novel approach to predicting the performance of parallel applications is based on the concept of Parallel Application Signatures, which consists in extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be obtained. One problem is that the performance of an application depends on the program's workload. Each type of workload affects differently how an application performs on a given system, and therefore affects the signature's execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how a program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects the parallel application signature. Using regression analysis, we are able to generalize each phase's execution time and weight functions to predict an application's performance on a target system for any workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications, and well-known real scientific applications.
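
    A minimal sketch of the prediction formula behind the signature approach, with made-up training measurements: total time ≈ Σ_i weight_i(W) × time_i(W), where each phase's execution time and repetition count are regressed against the workload size W.

        import numpy as np

        W_train = np.array([100, 200, 400, 800])          # workload sizes measured
        t_phase = np.array([[0.11, 0.21, 0.42, 0.83],     # per-phase execution times (s)
                            [0.05, 0.09, 0.20, 0.41]])
        w_phase = np.array([[10, 10, 10, 10],             # per-phase repetition counts
                            [ 2,  4,  8, 16]])

        def predict_total(W):
            # Fit each phase's time and weight as a function of workload size,
            # then sum weight(W) * time(W) over the phases of the signature.
            total = 0.0
            for times, weights in zip(t_phase, w_phase):
                t_fit = np.polyfit(W_train, times, 1)     # linear regression
                w_fit = np.polyfit(W_train, weights, 1)
                total += np.polyval(w_fit, W) * np.polyval(t_fit, W)
            return total

        print(f"predicted execution time for W=600: {predict_total(600):.2f} s")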