
    A probabilistic model checking approach to analysing reliability, availability, and maintainability of a single satellite system

    Satellites now form a core component of space-based systems such as GPS and GLONASS, which provide location and timing information for a variety of uses. Such satellites are designed to operate in orbit for lifetimes of 10 years or more. Reliability, availability and maintainability (RAM) analysis of these systems is indispensable in the design phase in order to minimise failures, increase the mean time between failures (MTBF), plan maintainability strategies, optimise reliability and maximise availability. In this paper, we present a formal model of a single satellite and a logical specification of its reliability, availability and maintainability properties. The probabilistic model checker PRISM is used to perform automated quantitative analyses of these properties.
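
    As a back-of-the-envelope companion to this kind of RAM analysis (the paper's actual PRISM models are not reproduced here), the Python sketch below computes steady-state availability and mission reliability for a single repairable satellite under assumed, purely illustrative failure and repair rates.

        import math

        # Hypothetical rates (per year); illustrative only, not taken from the paper.
        failure_rate = 0.1   # lambda: one failure every 10 years on average (MTBF = 10 years)
        repair_rate = 12.0   # mu: in-orbit recovery takes about a month on average

        # Steady-state availability of a two-state (up/down) Markov model:
        # A = mu / (lambda + mu) = MTBF / (MTBF + MTTR)
        availability = repair_rate / (failure_rate + repair_rate)

        # Reliability over a mission time t, assuming exponentially distributed failures:
        # R(t) = exp(-lambda * t)
        mission_years = 10.0
        reliability = math.exp(-failure_rate * mission_years)

        print(f"Steady-state availability: {availability:.4f}")
        print(f"Reliability over {mission_years:.0f} years: {reliability:.4f}")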

    On-Line Dependability Enhancement of Multiprocessor SoCs by Resource Management

    This paper describes a new approach to the dependable design of homogeneous multiprocessor SoCs, illustrated with a satellite-navigation application. First, the NoC dependability is functionally verified via embedded software. Then the Xentium processor tiles are periodically verified via on-line self-testing techniques, using a new Dependability Manager infrastructure IP (IIP). Based on the Dependability Manager results, faulty tiles are electronically excluded and replaced by fault-free spare tiles via on-line resource management. This integrated approach enables fast electronic fault detection, diagnosis and repair, and hence high system availability. The dependability application runs in parallel with the actual application, resulting in a highly dependable system. All parts have been verified by simulation.
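
    A minimal sketch of the resource-management idea, assuming a test_tile() self-test stub and invented tile and task names (the paper's Dependability Manager and NoC interfaces are not modelled):

        # Tiles are periodically self-tested; tasks mapped to a tile that fails
        # its test are remapped to a fault-free spare. All names are hypothetical.

        def test_tile(tile_id):
            # Stand-in for the on-line self-test run by the Dependability Manager.
            return tile_id not in {2}  # pretend tile 2 has developed a fault

        active = [0, 1, 2]       # tiles currently running the application
        spares = [3, 4]          # fault-free spare tiles held in reserve
        mapping = {"task_a": 0, "task_b": 1, "task_c": 2}

        for tile in list(active):
            if not test_tile(tile):
                replacement = spares.pop(0)        # take a spare tile
                active[active.index(tile)] = replacement
                for task, t in mapping.items():    # remap tasks off the faulty tile
                    if t == tile:
                        mapping[task] = replacement
                print(f"tile {tile} excluded, replaced by spare {replacement}")

        print("mapping after repair:", mapping)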

    Requirements modelling and formal analysis using graph operations

    The increasing complexity of enterprise systems requires a more advanced analysis of the expected services than is currently possible. Consequently, the specification stage, which can be facilitated by formal verification, becomes very important in the system life-cycle. This paper presents a formal modelling approach that can be used to better represent the reality of the system and to verify the expected or existing properties of the system, taking environmental characteristics into account. To this end, we first propose a formalization process based upon the specification of properties, and second, we use Conceptual Graph operations to develop reasoning mechanisms for verifying requirements statements. The graphical visualization of this reasoning allows system specifications to be captured correctly by making it easier to determine whether desired properties hold. The approach is applied to the field of enterprise modelling.
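
    The projection operation that underlies this kind of Conceptual Graph reasoning can be approximated with labelled subgraph matching; the sketch below does so with networkx on an invented order-processing example, as an illustration rather than the paper's formalism.

        # A requirement "holds" if its graph can be mapped into the system model
        # graph. Projection is approximated here by labelled subgraph isomorphism.
        import networkx as nx
        from networkx.algorithms import isomorphism

        system_model = nx.DiGraph()
        system_model.add_edges_from([
            ("Order", "validates"), ("validates", "Clerk"),
            ("Order", "triggers"), ("triggers", "Shipment"),
        ])
        for n in system_model:
            system_model.nodes[n]["label"] = n

        requirement = nx.DiGraph()
        requirement.add_edges_from([("Order", "triggers"), ("triggers", "Shipment")])
        for n in requirement:
            requirement.nodes[n]["label"] = n

        matcher = isomorphism.DiGraphMatcher(
            system_model, requirement,
            node_match=isomorphism.categorical_node_match("label", None))
        print("requirement holds:", matcher.subgraph_is_isomorphic())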

    Domain specific software design for decision aiding

    McDonnell Aircraft Company (MCAIR) is involved in many large multi-discipline design and development efforts for tactical aircraft. These involve a number of design disciplines that must be coordinated to produce an integrated design and a successful product. Our interpretation of a domain-specific software design (DSSD) is a representation or framework that is specialized to support a limited problem domain; a DSSD is an abstract software design shaped by the problem characteristics. This parallels the object-oriented analysis and design theme of letting the problem model directly drive the design. The DSSD concept extends the notion of software reusability to include representations or frameworks. It supports the entire software life cycle and specifically leads to improved prototyping capability, supports system integration, and promotes reuse of software designs and supporting frameworks. The example presented in this paper is the task network architecture developed for the MCAIR Pilot's Associate program. The task network concept supported both module development and system integration within the domain of operator decision aiding, and it is presented as an instance where a software design exhibited many of the attributes associated with the DSSD concept.
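
    As a loose illustration of what a task network for decision aiding might look like (the structure, names and preconditions here are assumptions, not MCAIR's actual design):

        # Tasks with preconditions and subtasks, walked to find what is
        # currently actionable in a given situation. Purely illustrative.
        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Task:
            name: str
            precondition: Callable = lambda state: True
            subtasks: List["Task"] = field(default_factory=list)

            def active(self, state):
                # Yield leaf tasks whose preconditions hold in the current state.
                if not self.precondition(state):
                    return
                if not self.subtasks:
                    yield self.name
                for sub in self.subtasks:
                    yield from sub.active(state)

        mission = Task("mission", subtasks=[
            Task("evade_threat", precondition=lambda s: s["threat_detected"]),
            Task("navigate_waypoint", precondition=lambda s: not s["threat_detected"]),
        ])
        print(list(mission.active({"threat_detected": True})))  # ['evade_threat']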

    Multi-objective optimisation of product modularity

    The optimal modular configuration of a product's architecture can lead to many advantages throughout the product lifecycle, such as ease of product upgrade, maintenance, repair and disposal, increased product variety, and greater product development speed. However, finding an optimal modular configuration is often difficult, and any solution will invariably involve trade-offs between various lifecycle drivers. One of the main strengths of computerised optimisation is that trade-off analysis becomes simple and straightforward, which speeds up the product architecture decision-making process. However, there is a lack of computerised methods that can optimise modularity for multiple lifecycle objectives. To this end, an integrated optimisation framework has been developed to optimise modularity from a whole-lifecycle perspective, covering design, production, use and end of life. For each lifecycle phase there are two modularity criteria: module independence and module coherence. The criteria under module independence evaluate the degree of coupling between the product's components; coupling can be physical, functional or design-based. The criteria under module coherence evaluate the similarity of modularity drivers between components. The paper examines the developed optimisation framework and a software prototype. The prototype software uses a number of matrices to represent the product architecture, and a goal-based genetic algorithm searches these matrices for the modular configurations that best satisfy the criteria of the four lifecycle phases. Sensitivity analysis is carried out by changing the goal weights.
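
    A minimal sketch of the search idea, assuming a toy design structure matrix (DSM) and a single intra-module-coupling objective; the paper's goal-based, weighted multi-objective criteria are considerably richer:

        # A genetic algorithm assigns components to modules so that coupling
        # recorded in the DSM falls inside modules. DSM and settings are invented.
        import random

        DSM = [  # symmetric coupling strengths between 6 hypothetical components
            [0, 3, 3, 0, 0, 0],
            [3, 0, 2, 0, 1, 0],
            [3, 2, 0, 0, 0, 0],
            [0, 0, 0, 0, 4, 3],
            [0, 1, 0, 4, 0, 2],
            [0, 0, 0, 3, 2, 0],
        ]
        N, MODULES = len(DSM), 2

        def fitness(assign):
            # Reward coupling kept inside a module (module-independence proxy).
            return sum(DSM[i][j] for i in range(N) for j in range(N)
                       if assign[i] == assign[j])

        def evolve(pop_size=40, generations=60, mutation=0.1):
            pop = [[random.randrange(MODULES) for _ in range(N)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]          # keep the fitter half
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, N)       # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mutation:
                        child[random.randrange(N)] = random.randrange(MODULES)
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        best = evolve()
        print("module assignment:", best, "fitness:", fitness(best))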

    Implementing Privacy Policy: Who Should Do What?

    Academic scholarship on privacy has focused on the substantive rules and policies governing the protection of personal data. An extensive literature has debated alternative approaches for defining how private and public institutions can collect and use information about individuals. But the attention given to the what of U.S. privacy regulation has overshadowed consideration of how and by whom privacy policy should be formulated and implemented. U.S. privacy policy is an amalgam of activity by a myriad of federal, state, and local government agencies, yet the quality of substantive privacy law depends greatly on which agency or agencies are running the show. Unfortunately, such implementation-related matters have been discounted or ignored, with the clear implication that they only need to be addressed after the “real” work of developing substantive privacy rules is completed. As things stand, the development and implementation of U.S. privacy policy is compromised by the murky allocation of responsibilities and authority among federal, state, and local governmental entities, compounded by the inevitable tensions associated with the large number of entities active in this regulatory space. These deficiencies have had major adverse consequences, both domestically and internationally. Without substantial upgrades of institutions and infrastructure, privacy law and policy will continue to fall short of what it could (and should) achieve.

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, the fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as storage is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the technology that best fulfills its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider when making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution, which brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
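
    As a small illustration of how the same record shapes up under the four data models compared here (the record and key conventions are invented):

        import json

        user = {"id": "u42", "name": "Ada", "follows": ["u7"], "city": "London"}

        # Document-oriented: the whole record is one self-contained document.
        document_store = {"users/u42": user}

        # Key-value: an opaque value behind a single key; the store never inspects it.
        kv_store = {"user:u42": json.dumps(user)}

        # Wide column: one row key, values grouped into column families.
        wide_column = {"u42": {"profile": {"name": "Ada", "city": "London"},
                               "edges": {"follows:u7": ""}}}

        # Graph: entities as nodes, relationships as first-class edges.
        nodes = {"u42": {"name": "Ada"}, "u7": {"name": "Lin"}}
        edges = [("u42", "follows", "u7")]

        print(json.loads(kv_store["user:u42"])["name"])  # 'Ada'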

    Parameterised Multiparty Session Types

    For many application-level distributed protocols and parallel algorithms, the set of participants, the number of messages or the interaction structure are only known at run-time. This paper proposes a dependent type theory for multiparty sessions which can statically guarantee type-safe, deadlock-free multiparty interactions among processes whose specifications are parameterised by indices. We use the primitive recursion operator from Gödel's System T to express a wide range of communication patterns while keeping type checking decidable. To type individual distributed processes, a parameterised global type is projected onto a generic generator which represents a class of all possible end-point types. We prove the termination of the type-checking algorithm in the full system with both multiparty session types and recursive types. We illustrate our type theory through non-trivial programming and verification examples taken from parallel algorithms and Web services use cases.
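
    As a rough intuition for the parameterisation idea, the sketch below generates the message sequence of an n-participant ring protocol by recursion on n, in the spirit of the primitive recursion operator; the notation is invented and is not the paper's type syntax:

        def ring_global_type(n):
            """Worker[i] -> Worker[i+1] for each i < n, then Worker[n] -> Worker[0]."""
            if n == 0:
                return ["W[0] -> W[0] : <data>"]  # degenerate base case
            return (ring_global_type(n - 1)[:-1]          # reuse the shorter ring,
                    + [f"W[{n-1}] -> W[{n}] : <data>",    # extend it by one hop,
                       f"W[{n}] -> W[0] : <data>"])       # and close the cycle

        for step in ring_global_type(3):
            print(step)
        # W[0] -> W[1] : <data>
        # W[1] -> W[2] : <data>
        # W[2] -> W[3] : <data>
        # W[3] -> W[0] : <data>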

    Formal Verification of a MESI-based Cache Implementation

    Cache coherency is crucial to multi-core systems with a shared-memory programming model. Coherency protocols have been formally verified at the architectural level with relative ease, but several subtle issues creep into the hardware realization of a cache in a multi-processor environment. The assumption made in the abstract model that state transitions are atomic is invalid for the HDL implementation, where each transition is composed of many concurrent multi-core operations. As a result, even with a blocking bus, several transient states come into existence. Most modern processors optimize communication with a split-transaction bus, which results in further transient states and race conditions. The design and verification of cache coherency is therefore increasingly complex and challenging. Simulation techniques are insufficient to ensure memory consistency and the absence of deadlock, livelock, and starvation; at best, it is tediously complex and time-consuming to reach confidence in functionality with simulation alone. Formal methods are ideally suited to identifying the numerous race conditions and subtle failures. In this study, we perform formal property verification on the RTL of a multi-core level-1 cache design based on the snooping MESI protocol. We demonstrate full-proof verification of the coherence module in JasperGold, using complexity-reduction techniques through parameterization. We verify that the assumptions needed to constrain the inputs of the stand-alone cache coherence module hold as valid assertions in the instantiation environment. We compare the results obtained from formal property verification against a state-of-the-art UVM environment, highlight the benefits of a synergistic collaboration between simulation and formal techniques, and present formal analysis as a generic toolkit with numerous usage models in the digital design process.
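
    As a toy companion to this kind of verification, the sketch below exhaustively explores an atomic abstract MESI model (precisely the atomicity assumption the RTL breaks) and asserts the coherence invariant that a Modified or Exclusive copy is the only valid copy; it is an illustration, not the paper's RTL or JasperGold setup:

        N = 3  # caches sharing one line; states are M, E, S or I per cache

        def step(state, i, op):
            # Return the successor cache states after cache i performs op.
            s = list(state)
            if op == "read" and s[i] == "I":
                if all(o == "I" for j, o in enumerate(s) if j != i):
                    s[i] = "E"                                  # no sharers: Exclusive
                else:
                    s = ["S" if o != "I" else "I" for o in s]   # snoopers downgrade
                    s[i] = "S"
            elif op == "write":
                s = ["I"] * len(s)                              # invalidate other copies
                s[i] = "M"
            return tuple(s)

        def explore():
            init = ("I",) * N
            seen, frontier = {init}, [init]
            while frontier:
                st = frontier.pop()
                # Coherence invariant: an M or E copy is the only valid copy.
                for i, o in enumerate(st):
                    if o in ("M", "E"):
                        assert all(p == "I" for j, p in enumerate(st) if j != i), st
                for i in range(N):
                    for op in ("read", "write"):
                        nxt = step(st, i, op)
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append(nxt)
            return len(seen)

        print("reachable states checked:", explore())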