
    Klever: Verification Framework for Critical Industrial C Programs

    Automatic software verification tools help to find hard-to-detect faults in programs checked against specified requirements non-interactively. Moreover, they can formally prove program correctness under certain assumptions. These capabilities are vital for the verification of critical industrial programs like operating system kernels and embedded software. However, such programs can contain hundreds or thousands of KLOC, which prevents obtaining valuable verification results in any reasonable time when checking non-trivial requirements. Also, existing tools do not provide widely adopted means for environment modeling, specification of requirements, verification of many versions and configurations of target programs, or expert assessment of verification results. In this paper, we present the Klever software verification framework, designed to reduce the effort of applying automatic software verification tools to large and critical industrial C programs.

    VST-A: A Foundationally Sound Annotation Verifier

    An interactive program verification tool usually requires users to write formal proofs in a theorem prover such as Coq or Isabelle, which is an obstacle for most software engineers. In comparison, annotation verifiers can use assertions in source files as hints for program verification, but they themselves do not have a formal soundness proof. In this paper, we demonstrate VST-A, a foundationally sound annotation verifier for sequential C programs. On one hand, users can write higher-order assertions in comments in C programs. On the other hand, separation logic proofs are generated in the backend, whose proof rules are formally proven sound with respect to CompCert's Clight semantics. Residual proof goals in Coq may be generated if some assertion entailments cannot be verified automatically.

    An Evidence-Based Faculty Development Program For Online Teaching In Higher Education

    A critical component in the successful implementation of online education hinges on providing faculty development opportunities that promote the use of pedagogical best practices in online teaching. While such training programs are on the rise, institutions are no closer to a universal consensus on how to design and evaluate such efforts. Historically, the success of faculty development programs has been measured via post-completion satisfaction surveys, attendance counts, and faculty perceptions of the usefulness of the content immediately following a training event. However, such metrics rarely provide an accurate measurement of the true efficacy of training, which in the context of online faculty development points to the adoption of pedagogical best practices in online teaching. There is a clear call in the literature for institutions and faculty developers to adopt evidence-based models in faculty training to identify the strategies that work best. To that end, the purpose of this study was to document how a higher education institution implemented an evidence-based faculty development program for online teaching. The researcher mounted the investigation on a case study framework and centered the lens on the training developers, who lent first-hand accounts of their experiences when implementing an evidence-based model. This study explored how the evidence-based program was designed, the factors that led to its implementation, the reported enablers and barriers to its deployment, the role of instructional designers in the program, and the institutional conditions perceived by participants to support the implementation. Data were collected through document analysis and through one-on-one interviews with trainers and middle managers.
The study revealed that traditional methods used to measure training programs (satisfaction surveys, participation counts) were insufficient for verifying learning, and that training developers viewed deeper, more sophisticated methods of program evaluation as desirable. However, training developers also reported concerns regarding the scalability of evidence-based models in higher education, and they perceived certain institutional conditions as enablers and barriers. The study also explored the role of instructional designers as supporters of the learning experience. The researcher suggested several key areas for future investigation to continue building on the growing body of knowledge related to supporting faculty teaching online.

    Operating System Contribution to Composable Timing Behaviour in High-Integrity Real-Time Systems

    The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To demonstrate what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, a first in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we shy away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs.
To corroborate our results, we present findings from real-world case studies from the avionics industry.

    Model for WCET prediction, scheduling and task allocation for emergent agent-behaviours in real-time scenarios

    To date, no real-time models are known that were specifically developed for use in open systems such as Virtual Agent Organizations (VOs). Conventionally, real-time models are applied to closed systems where all variables are known a priori. This thesis presents new contributions and the novel integration of real-time agents within VOs. To the best of our knowledge, this is the first model specifically designed for application in VOs with hard timing constraints. This thesis provides a new perspective that combines the openness and dynamism required in a VO with real-time constraints. This is a challenging combination, since the former paradigm is not strict, as the very term "open system" indicates, whereas the latter must satisfy strict constraints. In summary, the model presented makes it possible to define the actions that a VO must carry out within a given deadline, considering the changes that may occur during the execution of a particular plan. It is, in effect, real-time scheduling in a VO. Another major contribution of this thesis is a model for computing the worst-case execution time (WCET). The proposal is an effective model for computing the worst-case scenario when an agent wishes to join a VO and must therefore include its tasks or behaviours in the real-time system; that is, the WCET of emergent behaviours is computed at run time. Also included are a local scheduler for each execution node, based on the FPS (fixed-priority scheduling) algorithm, and a distribution of tasks among the nodes available in the system. Both models use advanced mathematical and statistical techniques to create an adaptive, robust and efficient mechanism for intelligent agents in VOs.
The lack, despite the study carried out, of a platform for open systems that supports agents with real-time constraints and provides the mechanisms needed to control and manage VOs is the main motivation for the development of the PANGEA+RT agent platform. PANGEA+RT is an innovative multi-agent platform that provides support for executing agents in real-time environments. Finally, a case study is presented in which heterogeneous robots collaborate to perform surveillance tasks. The case study was developed with the PANGEA+RT platform, into which the proposed model is integrated. Thus, at the end of the thesis, this case study yields the results and conclusions that validate the model.

    Parallel and Distributed Computing

    The 14 chapters presented in this book cover a wide variety of representative works ranging from hardware design to application development. In particular, the topics addressed are programmable and reconfigurable devices and systems, dependability of GPUs (Graphics Processing Units), network topologies, cache coherence protocols, resource allocation, scheduling algorithms, peer-to-peer networks, large-scale network simulation, and parallel routines and algorithms. In this way, the articles included in this book constitute an excellent reference for engineers and researchers who have particular interests in each of these topics in parallel and distributed computing.

    Mixed Criticality Systems - A Review (13th Edition, February 2022)

    This review covers research on the topic of mixed criticality systems that has been published since Vestal's 2007 paper. It covers the period up to the end of 2021. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice and research beyond mixed-criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.
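For readers new to the area, the baseline model from Vestal's 2007 paper, on which the surveyed work builds, can be sketched as follows: each task carries a criticality level and one WCET estimate per level, and the fixed-priority response-time analysis of a task uses the estimates taken at that task's own level. The rendering below is a standard one from the mixed-criticality literature, with notation assumed here ($T_j$ periods, $hp(i)$ the set of higher-priority tasks), not a formula quoted from the review itself.

```latex
% Task \tau_i: period T_i, criticality level L_i, and WCET estimates
% C_i(\ell) for each level \ell \le L_i, non-decreasing in \ell.
% Vestal-style fixed-priority response time of \tau_i, analysed at
% its own criticality level L_i:
R_i = C_i(L_i) + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j(L_i)
```

The recurrence is solved by fixed-point iteration starting from $R_i = C_i(L_i)$; later schemes surveyed in the review (e.g. adaptive mixed criticality) refine which per-level estimate each interfering task contributes.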