
    A framework for computer-aided validation

    This paper presents a framework for incorporating computer-based validation techniques into the independent verification and validation (IV&V) of software systems. The framework allows the IV&V team to capture its own understanding of the problem, and the expected behavior of any proposed system for solving it, in an executable system reference model that uses formal assertions to specify mission- and safety-critical behaviors. Execution-based model checking is then used to validate the correctness of the assertions and to verify the correctness and adequacy of the system under test. Approved for public release; distribution is unlimited.
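    The abstract does not include code; as a rough illustration of the idea, the sketch below shows an executable reference model whose safety-critical behavior is stated as a formal assertion and checked by exhaustively executing every reachable state. The toy valve/tank model, its states, and the assertion are invented here for illustration and are not taken from the paper.

```python
# Hypothetical sketch: an executable reference model whose safety-critical
# behavior is a formal assertion, checked by execution-based exploration of
# every reachable state of the model.
from collections import deque

# State: (valve_open, tank_level) for a toy fill/drain controller.
INITIAL = (False, 2)

def step(state):
    """Return all successor states the model allows from `state`."""
    valve_open, level = state
    successors = [(not valve_open, level)]        # the valve may be toggled at any time
    if valve_open and level > 0:                  # open valve drains the tank
        successors.append((valve_open, level - 1))
    if not valve_open and level < 3:              # closed valve lets it refill (bounded)
        successors.append((valve_open, level + 1))
    return successors

def safety_assertion(state):
    """Safety-critical property: the valve must never be open on an empty tank."""
    valve_open, level = state
    return not (valve_open and level == 0)

def check_model():
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:
        state = frontier.popleft()
        if not safety_assertion(state):
            return f"assertion violated in state {state}"
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return f"assertion holds over {len(seen)} reachable states"

if __name__ == "__main__":
    print(check_model())
```

    On this toy model the check actually reports a violation (the valve can stay open until the tank empties), which is the kind of mismatch between expected and actual behavior an IV&V reference model is meant to surface.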

    Towards the Model-Driven Engineering of Secure yet Safe Embedded Systems

    We introduce SysML-Sec, a SysML-based Model-Driven Engineering environment aimed at fostering collaboration between system designers and security experts at all methodological stages of the development of an embedded system. A central issue in the design of an embedded system is the definition of the hardware/software partitioning of the system architecture, which should take place as early as possible. SysML-Sec aims to extend the relevance of this analysis through the integration of security requirements and threats. In particular, we propose an agile methodology whose aim is to assess, early on, the impact of the security requirements, and of the security mechanisms designed to satisfy them, on the safety of the system. Security concerns are captured in a component-centric manner through existing SysML diagrams with only minimal extensions. Once the captured requirements have been refined into security and cryptographic mechanisms, security properties can be formally verified over the design. To perform the latter, model transformation techniques are implemented in the SysML-Sec toolchain to derive a ProVerif specification from the SysML models. An automotive firmware flashing procedure serves as a guiding example throughout the presentation.
    Comment: In Proceedings GraMSec 2014, arXiv:1404.163
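    As a schematic illustration of the model-to-text step mentioned above, the sketch below walks a tiny hand-written architecture description and emits a ProVerif-style specification with a confidentiality query. The model format, the names, and the generated fragment are assumptions made for illustration; they are not the SysML-Sec metamodel or its actual toolchain output.

```python
# Hypothetical sketch of a model-to-text transformation in the spirit of
# SysML-Sec: walk a small in-memory architecture model and emit a
# ProVerif-style specification. The model layout is invented for illustration.
model = {
    "channels": ["net"],                 # public communication channels
    "secrets": ["fw_key"],               # keys the attacker must not learn
    "flows": [("fw_image", "fw_key", "net")],   # (payload, key, channel)
}

def to_proverif(m):
    lines = []
    for ch in m["channels"]:
        lines.append(f"free {ch}: channel.")
    for s in m["secrets"]:
        lines.append(f"free {s}: bitstring [private].")
    lines.append("fun senc(bitstring, bitstring): bitstring.")
    lines.append("reduc forall x: bitstring, k: bitstring; sdec(senc(x, k), k) = x.")
    for s in m["secrets"]:
        lines.append(f"query attacker({s}).")    # confidentiality query
    lines.append("")
    lines.append("process")
    for payload, key, ch in m["flows"]:
        lines.append(f"    new {payload}: bitstring;")
        lines.append(f"    out({ch}, senc({payload}, {key}))")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_proverif(model))
```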

    Learning to deal with COTS (commercial off the shelf)

    With the advent of model-based development technologies, dependence on COTS components in software development has increased considerably. Use of COTS is considered economical and practical when it comes to integrating various software components. However, COTS components come with pitfalls: they are not usually accompanied by models or extensive specifications, which makes their use and integration with in-house developed software components a very challenging task. Conformance of the implementation with the specification forms the basis for our approach. In this thesis, we analyze an approach in which a model is extracted from the COTS software, which greatly aids integration. We developed a system that extracts a state machine model from COTS software using Dana Angluin's L* algorithm. We also developed a hierarchical approach to viewing the state machine model through static analysis of assembly code.
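    As a rough, self-contained illustration of the learning step described above, the sketch below implements a toy Angluin-style L* loop: membership queries fill an observation table, the table is closed, a hypothesis machine is built, and counterexample suffixes are added as distinguishing experiments. The target language, alphabet, and brute-force equivalence oracle are stand-ins for the thesis's setup, where membership queries would be answered by executing the COTS binary.

```python
# Hypothetical sketch of Angluin-style L* learning of a state machine, with
# the system under learning stood in for by a membership function.
from itertools import product

ALPHABET = ["a", "b"]

def member(word):
    """Membership oracle. Toy target: words with an even number of 'a's."""
    return word.count("a") % 2 == 0

def row(prefix, E):
    """One observation-table row: answers for prefix + each suffix in E."""
    return tuple(member(prefix + e) for e in E)

def learn(max_len=6):
    S, E = [""], [""]                  # access strings and distinguishing suffixes
    while True:
        # Close the table: every one-letter extension of S must match a row of S.
        closed = False
        while not closed:
            closed = True
            for s, a in product(S, ALPHABET):
                if row(s + a, E) not in {row(t, E) for t in S}:
                    S.append(s + a)
                    closed = False
        states = {row(s, E): s for s in S}   # hypothesis: one state per distinct row

        def hyp_accepts(word):
            cur = ""
            for ch in word:
                cur = states[row(cur + ch, E)]
            return member(cur)               # acceptance = table entry for suffix ""

        # Equivalence oracle: brute-force search for a short counterexample.
        cex = next(("".join(w) for n in range(max_len + 1)
                    for w in product(ALPHABET, repeat=n)
                    if hyp_accepts("".join(w)) != member("".join(w))), None)
        if cex is None:
            return states
        # Add all suffixes of the counterexample as new distinguishing experiments.
        E.extend(cex[i:] for i in range(len(cex) + 1) if cex[i:] not in E)

if __name__ == "__main__":
    dfa = learn()
    print(f"learned {len(dfa)} states, access strings {sorted(dfa.values())}")
```

    For the toy target the learner converges to a two-state machine after the initial closure round.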

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been treated as an “implementation concern” requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to conform to the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burdens. A dramatic reduction of the time and effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA, Architecture for Real-Time Application of Knowledge Artifacts, as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally introduce value into existing systems dynamically. This is made possible by the service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Towards the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of the experimentation are provided, with guidelines on how selected aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have also been initiated with Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, as have active relationships with other bodies.
    Doctoral Dissertation, Biomedical Informatics, 201
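    The following sketch is purely illustrative and not ARTAKA's actual API: it reduces the "inject a curated artifact as a service, then react to context events" flow to a minimal event bus and a registered artifact service, just to make the general shape of the architecture concrete. All names here (ArtifactService, ContextEventBus, the PHQ-2 rule) are hypothetical.

```python
# Purely illustrative sketch: a knowledge artifact packaged as a callable
# service and invoked when a matching context event arrives.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ArtifactService:
    artifact_id: str
    trigger: str                             # context event type it reacts to
    run: Callable[[dict], dict]              # executes the compiled artifact

@dataclass
class ContextEventBus:
    services: list = field(default_factory=list)

    def register(self, service):             # stand-in for marketplace deployment
        self.services.append(service)

    def publish(self, event):                # route a context event to matching artifacts
        return [s.run(event) for s in self.services if s.trigger == event["type"]]

# A curated questionnaire artifact reduced to a toy rule.
phq2 = ArtifactService(
    artifact_id="phq2-screen",
    trigger="encounter-start",
    run=lambda ev: {"prompt": "Administer PHQ-2", "patient": ev["patient"]},
)

bus = ContextEventBus()
bus.register(phq2)
print(bus.publish({"type": "encounter-start", "patient": "12345"}))
```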

    Conformance Checking Based on Multi-Perspective Declarative Process Models

    Process mining is a family of techniques that aim at analyzing business process execution data recorded in event logs. Conformance checking is a branch of this discipline embracing approaches for verifying whether the behavior of a process, as recorded in a log, is in line with some expected behavior provided in the form of a process model. The majority of these approaches require the input process model to be procedural (e.g., a Petri net). However, in turbulent environments characterized by high variability, the process behavior is less stable and predictable, and procedural process models are less suitable for describing a business process. Declarative specifications, working under an open-world assumption, allow the modeler to express several possible execution paths as a compact set of constraints: any process execution that does not contradict these constraints is allowed. One of the open challenges in the context of conformance checking with declarative models is the capability of supporting multi-perspective specifications. In this paper, we close this gap by providing a framework for conformance checking based on MP-Declare, a multi-perspective version of the declarative process modeling language Declare. The approach has been implemented in the process mining tool ProM and evaluated in three real-life case studies.
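    As a minimal illustration of what a multi-perspective constraint check involves, the sketch below evaluates a single data-aware "response" constraint over one trace: every activation (a claim submission above a threshold) must eventually be followed by a correlated target event. The constraint, the trace, and the checking routine are invented for illustration and are far simpler than the MP-Declare semantics implemented in the ProM plug-in.

```python
# Illustrative sketch: checking one multi-perspective "response" constraint,
# "every Submit Claim with amount > 1000 is eventually followed by an
# Approve Claim for the same claim id", over a single trace.
def check_response(trace, activation, target, correlated):
    """Return the activations left unfulfilled at the end of the trace."""
    pending = []
    for event in trace:
        if activation(event):
            pending.append(event)
        # Discharge any pending activation that this event fulfils.
        pending = [a for a in pending if not (target(event) and correlated(a, event))]
    return pending

trace = [
    {"activity": "Submit Claim", "claim": 7, "amount": 2500},
    {"activity": "Approve Claim", "claim": 7},
    {"activity": "Submit Claim", "claim": 9, "amount": 1800},   # never approved
]

violations = check_response(
    trace,
    activation=lambda e: e["activity"] == "Submit Claim" and e["amount"] > 1000,
    target=lambda e: e["activity"] == "Approve Claim",
    correlated=lambda act, e: act["claim"] == e["claim"],
)
print("violations:", violations)   # the claim-9 submission remains unfulfilled
```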

    Contract representation for validation and run time monitoring

    Organisations are increasingly using the Internet to offer their own services and to utilise the services of others. This naturally leads to resource sharing across organisational boundaries. Nevertheless, organisations will require their interactions with other organisations to be strictly controlled. In the paper-based world, business interactions, information exchange and sharing have been conducted under the control of contracts that the organisations sign. The world of electronic business needs to emulate electronic equivalents of these contract-based business management practices. This thesis examines how a 'conventional' contract can be converted into its electronic equivalent and how it can be used for controlling business interactions taking place through computer messages. To implement a contract electronically, a conventional text contract needs to be described in a mathematically precise notation so that the description can be subjected to rigorous analysis and freed from the ambiguities that the original human-oriented text is likely to contain. Furthermore, a suitable run-time infrastructure is required for monitoring the executable version of the contract. To address these issues, this thesis describes how standard conventional contracts can be converted into Finite State Machines (FSMs), illustrating how to map the rights and obligations extracted from the clauses of the contract onto the states, transition and output functions, and input and output symbols of an FSM. The thesis then develops a list of correctness properties that a typical executable business contract should satisfy. A contract model should be validated against safety properties, which specify situations that the contract must not get into (such as deadlocks or unreachable states), and liveness properties, which describe qualities that are desirable for the contract to exhibit (responsiveness, accessibility, etc.). The FSM description can then be subjected to model checking; this is demonstrated with examples using the Promela language and the Spin validator. Subsequently, the FSM representation can be used to ensure that the clauses stipulated in the contract are observed when the contract is executed. The requirements of a suitable run-time infrastructure for monitoring contract compliance are discussed and a prototype middleware implementation is presented.
    UK Engineering and Physical Sciences Research Council (EPSRC)
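    To make the FSM-based analysis concrete, the sketch below encodes a toy purchase contract as a finite state machine and checks two of the safety properties mentioned above: unreachable states and deadlock states. The states, messages, and clauses are hypothetical; the thesis performs the real validation with Promela models and the Spin validator.

```python
# Hypothetical sketch: a conventional purchase contract reduced to an FSM,
# then checked for deadlock states and unreachable states.
from collections import deque

STATES = {"negotiating", "ordered", "invoiced", "paid", "delivered", "disputed"}
INITIAL, FINAL = "negotiating", {"delivered"}

# Transitions: (state, business message) -> next state, derived from clauses
# such as "the seller is obliged to invoice the buyer after an order".
TRANSITIONS = {
    ("negotiating", "place_order"): "ordered",
    ("ordered", "send_invoice"): "invoiced",
    ("invoiced", "pay"): "paid",
    ("invoiced", "reject_invoice"): "disputed",   # note: no way out of "disputed"
    ("paid", "deliver_goods"): "delivered",
}

def reachable():
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:
        s = frontier.popleft()
        for (src, _), dst in TRANSITIONS.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

def deadlocks():
    has_exit = {src for (src, _) in TRANSITIONS}
    return {s for s in reachable() if s not in FINAL and s not in has_exit}

print("unreachable states:", STATES - reachable())   # -> set() for this contract
print("deadlock states:", deadlocks())                # -> {'disputed'}
```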

    Towards a formally designed and verified embedded operating system: case study using the B method

    The dramatic growth in practical applications for iris biometrics has been accompanied by relevant developments in the underlying algorithms and techniques. Along with the research focused on near-infrared images captured with subject cooperation, efforts are being made to minimize the trade-off between the quality of the captured data and the recognition accuracy in less constrained environments, where images are obtained at the visible wavelength, at increased distances, under simplified acquisition protocols and adverse lighting conditions. At a first stage, interpolation effects in the normalization process are addressed, assessing their impact on the overall recognition error rates. Secondly, a couple of post-processing steps are applied to Daugman's approach, attempting to increase its performance in the particular unconstrained environments this thesis assumes. Analyses in both the frequency and spatial domains, and finally pattern recognition methods, are applied in these efforts. This thesis embodies the study of how subject recognition can be achieved, without cooperation, making use of iris data captured at a distance, on the move, and at visible wavelengths. Widely used methods designed for constrained scenarios are analyzed.
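    As a small illustration of the normalization step whose interpolation effects are studied above, the sketch below maps the iris annulus to a fixed rectangular grid (Daugman's rubber-sheet model) and makes the interpolation choice explicit, switching between nearest-neighbour and bilinear sampling. Concentric pupil and iris circles, the grid size, and the synthetic image are simplifying assumptions, not the thesis's actual implementation.

```python
# Illustrative sketch: rubber-sheet normalization of the iris annulus onto a
# fixed radial-by-angular grid, with the interpolation method made explicit.
import numpy as np

def normalize_iris(img, cx, cy, r_pupil, r_iris, radial=64, angular=256,
                   bilinear=True):
    thetas = np.linspace(0, 2 * np.pi, angular, endpoint=False)
    radii = np.linspace(0, 1, radial)
    # Polar sampling grid between the pupil boundary and the limbus
    # (concentric circles assumed for brevity).
    r = r_pupil + radii[:, None] * (r_iris - r_pupil)
    xs = cx + r * np.cos(thetas)[None, :]
    ys = cy + r * np.sin(thetas)[None, :]
    if not bilinear:                     # nearest-neighbour sampling
        return img[np.rint(ys).astype(int), np.rint(xs).astype(int)]
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    dx, dy = xs - x0, ys - y0
    # Bilinear interpolation from the four surrounding pixels.
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

# Toy usage on a synthetic image; a real pipeline would segment the iris first.
img = np.random.rand(240, 320)
strip = normalize_iris(img, cx=160, cy=120, r_pupil=20, r_iris=80)
print(strip.shape)   # (64, 256)
```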