
    Semiformal Verification of Embedded Software in Medical Devices Considering Stringent Hardware Constraints

    In recent years, the complexity of software in embedded products has increased to the point that the verification of embedded software (ESW) now plays an important role in ensuring product quality. Embedded systems engineers routinely face the problem of verifying properties that must meet application deadlines, access memory regions, handle concurrency, and control hardware registers. This work proposes a semiformal verification approach that combines dynamic and static verification to stress the system and cover its state space exhaustively. We perform a case study on embedded software used in the medical device domain and conclude that the proposed approach improves coverage and substantially reduces verification time.
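    The paper itself contains no code, but the core idea of pairing dynamic and static verification can be sketched in a few lines. The toy controller, its inputs, and the deadline property below are hypothetical illustrations, not the authors' medical-device case study: random simulation stresses long traces quickly, while bounded exhaustive exploration covers every input sequence up to a small depth.

```python
# Hypothetical sketch: combining dynamic (random simulation) and static
# (bounded exhaustive) checks of a toy embedded controller model.
# The controller, inputs, and deadline property are illustrative only.
import itertools
import random

INPUTS = ["tick", "sensor_high", "sensor_low", "ack"]
DEADLINE_TICKS = 3  # alarm must be raised within 3 ticks of a high reading

def step(state, ticks, inp):
    """One transition of the toy controller; returns (state, ticks_since_high)."""
    if inp == "sensor_high":
        return "sensing", 0
    if state == "sensing" and inp == "tick":
        ticks += 1
        return ("alarm", ticks) if ticks >= DEADLINE_TICKS else ("sensing", ticks)
    if state == "alarm" and inp == "ack":
        return "idle", 0
    return state, ticks

def deadline_violated(state, ticks):
    return state == "sensing" and ticks > DEADLINE_TICKS

def dynamic_check(runs=1000, length=50):
    """Random simulation: stresses long traces quickly, but is not exhaustive."""
    for _ in range(runs):
        state, ticks = "idle", 0
        for inp in random.choices(INPUTS, k=length):
            state, ticks = step(state, ticks, inp)
            assert not deadline_violated(state, ticks), "deadline missed (dynamic)"

def static_check(depth=6):
    """Bounded exhaustive exploration: covers every input sequence up to `depth`."""
    for seq in itertools.product(INPUTS, repeat=depth):
        state, ticks = "idle", 0
        for inp in seq:
            state, ticks = step(state, ticks, inp)
            assert not deadline_violated(state, ticks), "deadline missed (static)"

dynamic_check()
static_check()
print("no deadline violations found within the explored bounds")
```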

    An approach to enacting business process models in support of the life cycle of integrated manufacturing systems

    The complexity of enterprise engineering processes requires the application of reference architectures as a means of guiding the achievement of an adequate level of business integration. This research addresses important aspects of this requirement by associating the formalism of reference architectures with the various life-cycle phases of integrated manufacturing systems (IMS) and enabling their use in addressing contemporary systems engineering issues. In pursuit of this aim, the following research activities were carried out: (1) devising a framework which supports key phases of the IMS life cycle, and (2) populating part of this framework with an initial combination of architectures which can be encapsulated into a computer-aided systems engineering environment. This led to the creation of a workbench capable of providing support for modelling, analysis, simulation, rapid prototyping, configuration and run-time operation of an IMS, based on a consistent set of models associated with the engineering processes involved. The research effort concentrated on selecting and investigating the use of appropriate formalisms which underpin a selection of architectures and tools (i.e. CIM-OSA, Petri nets, object-oriented methods and CIM-BIOSYS), by designing, implementing, applying and testing the workbench. The main contribution of this research is to demonstrate that it is possible to retain an adequate level of formalism, via computational structures and models, which extends through the IMS life cycle from a conceptual description of the system through to the actions that the system performs when operating. The underlying methodology which supports this contribution is based on enacting models of system behaviour which encode important coordination aspects of manufacturing systems. The strategy for demonstrating the incorporation of formalism into the IMS life cycle was to enable the aggregation into a workbench of knowledge of 'what' the system is expected to achieve (i.e. the 'problems' to be addressed) and 'how' the system can achieve it (i.e. possible 'solutions'). Within the workbench, such knowledge is represented through an amalgamation of business process modelling and object-oriented modelling approaches which, when adequately manipulated, can lead to business integration.
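    As a loose illustration of what 'enacting' a business process model can mean in executable terms (the order-handling process, place names, and firing rule below are assumptions for this sketch, not the workbench's CIM-OSA models), a process can be encoded as a Petri net whose transitions fire as tokens become available:

```python
# Hypothetical sketch: enacting a simple business process model as a Petri net.
# Places hold tokens; a transition fires when all of its input places are marked.
# The order-handling process and its names are illustrative only.

net = {
    # transition: (input places, output places)
    "receive_order": (["order_placed"], ["order_accepted"]),
    "schedule_job":  (["order_accepted"], ["job_scheduled"]),
    "manufacture":   (["job_scheduled", "machine_free"], ["product_ready", "machine_free"]),
    "dispatch":      (["product_ready"], ["order_fulfilled"]),
}

marking = {"order_placed": 1, "machine_free": 1}  # initial marking

def enabled(transition):
    inputs, _ = net[transition]
    return all(marking.get(place, 0) > 0 for place in inputs)

def fire(transition):
    """Consume one token from each input place, produce one in each output place."""
    inputs, outputs = net[transition]
    for place in inputs:
        marking[place] -= 1
    for place in outputs:
        marking[place] = marking.get(place, 0) + 1

# Enact the process: repeatedly fire any enabled transition until none remain.
while any(enabled(t) for t in net):
    transition = next(t for t in net if enabled(t))
    fire(transition)
    live = {p: n for p, n in marking.items() if n > 0}
    print(f"fired {transition:<14} -> marking {live}")
```
    Each firing corresponds to one coordination step of the process; a real enactment engine would additionally attach actions (for example, invoking manufacturing services) to transitions.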

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented, together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    Scalable system software for high performance large-scale applications

    In the last decades, high-performance large-scale systems have been a fundamental tool for scientific discovery and engineering advances. The sustained growth of supercomputing performance and the concurrent reduction in cost have made this technology available to a large number of scientists and engineers working on many different problems. The design of next-generation supercomputers will include traditional HPC requirements as well as new requirements to handle data-intensive computations. Data-intensive applications will hence play an important role in a variety of fields, and are the current focus of several research trends in HPC. Due to the challenges of scalability and power efficiency, the next generation of supercomputers needs a redesign of the whole software stack. Being at the bottom of the software stack, system software is expected to change drastically to support the upcoming hardware and to meet new application requirements. This PhD thesis addresses the scalability of system software. The thesis starts at the operating system level, first studying general-purpose OSs (e.g. Linux) and then lightweight kernels (e.g. CNK). We then focus on the runtime system: we implement a runtime system for distributed-memory systems that includes many of the system services required by next-generation applications. Finally, we focus on hardware features that can be exploited at user level to improve application performance and potentially be included in our advanced runtime system. The thesis contributions are the following. Operating system scalability: we provide an accurate study of the scalability problems of modern operating systems for HPC; we design and implement a methodology whereby detailed quantitative information may be obtained for each OS noise event, and we validate our approach by comparing it to other well-known standard techniques for analyzing OS noise, such as FTQ (Fixed Time Quantum). Evaluation of address translation management for a lightweight kernel: we provide a performance evaluation of different TLB management approaches (dynamic memory mapping, static memory mapping with replaceable TLB entries, and static memory mapping with fixed TLB entries, i.e. no TLB misses) on an IBM BlueGene/P system. Runtime system scalability: we show that a runtime system can efficiently incorporate system services and improve scalability for a specific class of applications; we design and implement a full-featured runtime system and programming model to execute irregular applications on a commodity cluster. The runtime library, called the Global Memory and Threading library (GMT), integrates a locality-aware Partitioned Global Address Space communication model with a fork/join program structure. It supports massive lightweight multi-threading, overlapping of communication and computation, and small-message aggregation to tolerate network latencies. We compare GMT to other PGAS models, hand-optimized MPI code and custom architectures (Cray XMT) on a set of large-scale irregular applications: breadth-first search, random walk and concurrent hash map access. Our runtime system shows performance orders of magnitude higher than other solutions on commodity clusters and competitive with custom architectures. User-level scalability exploiting hardware features: we show the high complexity of low-level hardware optimizations for single applications, as a motivation to incorporate this logic into an adaptive runtime system. We evaluate the effects of a controllable hardware-thread priority mechanism that controls the rate at which each hardware thread decodes instructions on IBM POWER5 and POWER6 processors. Finally, we show how to effectively exploit cache locality and the network-on-chip on the Tilera many-core architecture to improve intra-core scalability.
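    FTQ (Fixed Time Quantum), cited above as a standard OS-noise analysis technique, measures how much fixed-cost work fits into each fixed-length time quantum; quanta with noticeably lower counts indicate interference from the OS. The sketch below is a minimal illustration of that idea, not the thesis's own methodology; the quantum length, work-unit size, and 20% threshold are arbitrary assumptions.

```python
# Minimal sketch of an FTQ-style (Fixed Time Quantum) OS-noise measurement:
# count how many fixed-cost work units fit into each fixed-length time quantum.
# Quanta whose count dips well below the median were likely perturbed by OS noise.
# Quantum length, work-unit size, and the 20% threshold are arbitrary choices.
import statistics
import time

QUANTUM_S = 0.001   # 1 ms quantum
NUM_QUANTA = 2000
WORK_ITERS = 200    # size of one fixed-cost work unit

def work_unit():
    s = 0
    for i in range(WORK_ITERS):
        s += i * i
    return s

counts = []
for _ in range(NUM_QUANTA):
    end = time.perf_counter() + QUANTUM_S
    done = 0
    while time.perf_counter() < end:
        work_unit()
        done += 1
    counts.append(done)

median = statistics.median(counts)
noisy = [(i, c) for i, c in enumerate(counts) if c < 0.8 * median]
print(f"median work units per quantum: {median}")
print(f"quanta with >20% shortfall (candidate noise events): {len(noisy)}")
```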

    Exploring instructional leadership practices of school principals: a case study of three secondary schools in Umbumbulu circuit.

    Thesis (M.Ed.)-University of KwaZulu-Natal, Durban, 2012. There are substantive external demands for improved learner achievement, particularly in secondary schools, and increasingly, principals have to bear the pressures that accompany these demands. The concept of instructional leadership is advocated as one of the approaches that school leaders may consider in order to promote a culture of teaching and learning within their schools. Therefore, a qualitative case study was undertaken to explore the instructional leadership practices of three secondary school principals in Umbumbulu Circuit. The focus of the study was based on the assumption that principals are instructional leaders, as this is the expectation of government policy. The study therefore did not seek to find out whether the principals in the study were indeed instructional leaders; rather, it sought to understand the manner in which they put this expectation into practice. In short, the study sought to gain insight into how secondary school principals in this area enacted instructional leadership and why they enacted it the way they did. Three schools were selected from among those that had shown marked improvement in their matric results over roughly the past five years. The research design was qualitative, employing semi-structured interviews with three principals and three educators. These interviews were audio-recorded and transcribed for analysis. The results indicated that principals enacted instructional leadership practices by (a) sharing a vision among members of the school, (b) monitoring instruction, (c) encouraging professional development of their teaching staff, (d) ensuring that instructional time was not interrupted, (e) furnishing professional materials and resources to the teachers, (f) monitoring and discussing assessment issues with the teachers, (g) recognising and rewarding good performance, and (h) preparing and sustaining a learning environment conducive to teaching and learning. The main aim was to enhance teaching and learning in the schools, as these principals strongly believed that it was their responsibility to do so.

    Availability by Design: A Complementary Approach to Denial-of-Service


    A framework for automated concurrency verification

    Reasoning systems based on Concurrent Separation Logic make verifying complex concurrent algorithms readily possible. Such algorithms contain subtle protocols of permission and resource transfer between threads; to cope with these intricacies, modern concurrent separation logics contain many moving parts and integrate many bespoke logical components. Verifying concurrent algorithms by hand consumes much time, effort, and expertise. As a result, computer-assisted verification is a fertile research topic, and fully automated verification is a popular research goal. Unfortunately, the complexity of modern concurrent separation logics makes them hard to automate, and the proliferation and fast turnover of such logics exerts downward pressure against building tools for new logics. As a result, many such logics lack tooling. This dissertation proposes Starling: a scheme for creating concurrent program logics that are automatable by construction. Starling adapts the existing Concurrent Views Framework for sound concurrent reasoning systems, overlaying a framework for reducing concurrent proof outlines to verification conditions in existing theories (such as those accepted by off-the-shelf sequential solvers). This dissertation describes Starling in a bottom-up, modular manner. First, it shows the derivation of a series of general concurrency proof rules from the Views framework. Next, it shows how one such rule leads to the Starling framework itself. From there, it outlines a series of increasingly elaborate frontends: ways of decomposing individual Hoare triples over atomic actions into verification conditions suitable for encoding into backend theories. Each frontend leads to a concurrent program logic. Finally, the dissertation presents a tool for verifying C-style concurrent proof outlines, based on one of the above frontends. It gives examples of such outlines, covering a variety of algorithms, backend solvers, and proof techniques.
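    Starling's central move, reducing proof outlines to verification conditions that an off-the-shelf solver can discharge, can be illustrated very loosely (this is not Starling's actual encoding, and the variable names are made up) by checking a single Hoare-style condition for an atomic increment with the Z3 SMT solver's Python bindings (the z3-solver package):

```python
# Loose illustration (not Starling's actual encoding): discharge one
# verification condition for an atomic increment with an off-the-shelf
# SMT solver (Z3's Python bindings, from the z3-solver package).
# VC for {counter == n} counter := counter + 1 {counter == n + 1}:
# the negation of (pre /\ action -> post) must be unsatisfiable.
from z3 import And, Implies, Int, Not, Solver, unsat

counter, counter_next, n = Int("counter"), Int("counter_next"), Int("n")

pre = counter == n                      # assumed precondition
action = counter_next == counter + 1    # effect of the atomic command
post = counter_next == n + 1            # required postcondition on the new state

solver = Solver()
solver.add(Not(Implies(And(pre, action), post)))
result = solver.check()

print("verification condition valid:", result == unsat)
```
    Z3 reports the negated condition unsatisfiable, i.e. the verification condition is valid; a framework in Starling's style generates many such conditions from a proof outline and its chosen frontend and discharges them the same way.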