    The AADL Constraint Annex

    The SAE Architecture Analysis and Design Language (AADL) has been defined with a strong focus on the careful modeling of critical real-time embedded systems. Around this formalism, several analysis tools have been defined, e.g. for scheduling, safety, security, or performance. The SAE AS2-C wishes to complement AADL with a versatile language to support project-specific analysis. The Model Constraints Sublanguage Annex (or, in short, the Constraints Annex) provides a standard AADL sublanguage extension with three major objectives:
    • to allow specification of project-specific AADL language subsets and to enforce consistent use of the subset over all classifiers in a package and all packages in a project;
    • to allow specification of project-specific Structural Assertions on AADL instance models of component implementations, and of Structural Assertions on classifier types (component types, feature group types, and their extensions);
    • to allow specification of Behavior Assertions for feature groups, component types, and component implementations, grouped as Assumptions and Guarantees. Assumptions group together Behavior Assertions describing the expected behavior of the environment in which a component will operate. Guarantees group together Behavior Assertions that must be honored by all instances of the component, assuming it is deployed into an environment that honors the Assumption Behavior Assertions.
    In this presentation, we provide an overview of this language and report on ongoing implementation efforts to date.
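    The Constraints Annex has its own concrete syntax, which the abstract does not show. As a language-neutral illustration of the assume/guarantee idea it describes, the following Python sketch (all names hypothetical) checks a component's Guarantees only in environments that satisfy its Assumptions:

        # Hypothetical, language-neutral sketch of assume/guarantee checking.
        # The real Constraints Annex expresses these as Behavior Assertions
        # in AADL syntax; names and structure here are illustrative only.
        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        Env = Dict[str, float]  # observed values on a component's features

        @dataclass
        class Contract:
            assumptions: List[Callable[[Env], bool]] = field(default_factory=list)
            guarantees: List[Callable[[Env], bool]] = field(default_factory=list)

            def check(self, env: Env) -> str:
                # If any Assumption is violated, the Guarantees are vacuous:
                # the component was deployed outside its stated environment.
                if not all(a(env) for a in self.assumptions):
                    return "assumption violated: guarantees not applicable"
                if all(g(env) for g in self.guarantees):
                    return "ok: all guarantees hold"
                return "error: guarantee violated in a conforming environment"

        # Example: a sensor-filter component that assumes a bounded input
        # rate and guarantees a bounded output latency.
        filter_contract = Contract(
            assumptions=[lambda e: e["input_rate_hz"] <= 100.0],
            guarantees=[lambda e: e["latency_ms"] <= 5.0],
        )
        print(filter_contract.check({"input_rate_hz": 80.0, "latency_ms": 3.2}))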

    Information Sharing Solutions for Nato Headquarters

    NATO is an alliance of 26 nations that operates on a consensus basis, not a majority basis. Thorough and timely information exchange between nations is fundamental to its business process. Current technology and practices at NATO HQ are inadequate to meet modern-day requirements, despite the availability of demonstrated and accredited cross-domain technology solutions. This lack of integration between networks grows more complicated with time, as nations continue to invest in IT while ignoring the requirements for inter-networked gateways. This contributes to inefficiencies, fostering an atmosphere in which shortcuts are taken in order to get the job done. The author recommends that NATO HQ improve its presence on the Internet, building on the desired tenets of availability and security.

    Evaluation of MILS and reduced kernel security concepts for SCADA remote terminal units.

    The purpose of this project is to study the benefits that the Multiple Independent Levels of Security (MILS) approach can provide to Supervisory Control and Data Acquisition (SCADA) remote terminal units. This is accomplished through a heavy focus on MILS concepts such as resource separation, verification, and kernel minimization and reduction. Two architectures are leveraged to study the application of reduced-kernel concepts for a remote terminal unit (RTU). The first is the LynxOS embedded operating system, which is used to create a bootable image of a working RTU. The second is the Pistachio microkernel, the features and development environment of which are analyzed and catalogued to provide the basis for a future RTU. A survey of recent literature is included that focuses on the state of SCADA security, the MILS standard, and microkernel research. The design methodology for a MILS-compliant RTU is outlined, including a benefit analysis of applying MILS in an industrial network setting. Also included are analyses of the MILS concepts relevant to the design and of how LynxOS and Pistachio can be used to study some of these concepts. A section detailing the prototyping of RTUs on LynxOS and Pistachio is also included, followed by an initial security and performance analysis of both systems.
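    MILS builds on a separation kernel that hosts isolated partitions and permits only explicitly authorized information flows between them. As a conceptual illustration only (this is not a LynxOS or Pistachio API, neither of which is shown in the abstract), the following Python sketch models partitions exchanging messages under a static flow policy:

        # Conceptual sketch of a MILS-style separation policy: partitions
        # may exchange messages only along flows the policy allows. Real
        # separation kernels enforce isolation with kernel and hardware
        # mechanisms; this models the policy idea only.
        from collections import defaultdict

        class SeparationPolicy:
            def __init__(self, allowed_flows):
                # allowed_flows: set of (source_partition, dest_partition)
                self.allowed = set(allowed_flows)

            def permits(self, src, dst):
                return (src, dst) in self.allowed

        class Kernel:
            def __init__(self, policy):
                self.policy = policy
                self.mailboxes = defaultdict(list)

            def send(self, src, dst, message):
                if not self.policy.permits(src, dst):
                    raise PermissionError(f"flow {src} -> {dst} not authorized")
                self.mailboxes[dst].append((src, message))

        # A hypothetical RTU layout: field-device I/O and the SCADA uplink
        # sit in separate partitions; sensor data may flow out, but nothing
        # returns to the control partition except through a filter partition.
        policy = SeparationPolicy({("field_io", "uplink"),
                                   ("uplink", "filter"),
                                   ("filter", "field_io")})
        kernel = Kernel(policy)
        kernel.send("field_io", "uplink", "pressure=42.0")   # allowed
        # kernel.send("uplink", "field_io", "cmd")  # raises PermissionError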

    Application of PoF Based Virtual Qualification Methods for Reliability Assessment of Mission Critical PCBs

    Reliability is the ability of a product to perform its intended function for a specified period of time (or number of cycles) under a given set of life-cycle conditions. In today's compressed mission development cycles, where designing, building, and testing physical models must occur in a matter of months rather than years, projects do not have the luxury of iteratively building and testing those models. Physics of failure (PoF) is an engineering-based approach to reliability that begins with an understanding of materials, processes, physical interactions, degradation, and failure mechanisms, as well as identification of failure models. The PoF approach uses modeling and simulation to qualify a design and manufacturing process, with the ultimate intent of eliminating failures early in the design process by addressing root causes. Physics-of-failure analysis proactively incorporates reliability into the design process by establishing a scientific basis for evaluating new materials, structures, and technologies. Virtual physics-of-failure modeling allows engineers to determine whether a new technology node can be added to an existing system. This presentation illustrates the application of a PoF-based tool during the initial phases of a printed circuit board assembly development, and how the NASA GSFC team was able to dynamically study the effects of electronic-parts and printed-circuit-board material configuration changes under simulated thermal and vibrational stresses.
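    The abstract does not name the specific failure models used, but a classic PoF model for solder joints under thermal cycling is the Coffin-Manson low-cycle-fatigue relation. The following Python sketch, with typical textbook constants for eutectic SnPb solder and assumed geometry (none of these values come from the presentation), illustrates the kind of calculation such virtual qualification tools perform:

        # Illustrative Coffin-Manson style fatigue estimate for a solder
        # joint under thermal cycling. Material constants are typical
        # textbook values for eutectic SnPb solder, NOT presentation data.
        def cycles_to_failure(delta_gamma, eps_f=0.325, c=-0.442):
            """Mean cycles to failure:
               N_f = 0.5 * (delta_gamma / (2 * eps_f)) ** (1 / c)
            delta_gamma : cyclic shear strain range (dimensionless)
            eps_f       : fatigue ductility coefficient
            c           : fatigue ductility exponent (negative)
            """
            return 0.5 * (delta_gamma / (2.0 * eps_f)) ** (1.0 / c)

        # Crude strain estimate from CTE mismatch between part and board:
        # delta_gamma ~ (L_D / h) * |alpha_c - alpha_b| * delta_T
        L_D = 5e-3        # distance from neutral point, m (assumed)
        h = 0.1e-3        # solder joint height, m (assumed)
        alpha_c, alpha_b = 6e-6, 17e-6  # CTEs, 1/K (ceramic part, FR-4)
        delta_T = 80.0    # thermal cycle range, K (assumed)

        dg = (L_D / h) * abs(alpha_c - alpha_b) * delta_T
        print(f"strain range = {dg:.4f}, "
              f"N_f ~ {cycles_to_failure(dg):,.0f} cycles")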

    Socio-technical gambits that destroy cyber security & organisational resilience

    This chapter summarises how an organisation (and key individuals within it) could be subject to smart targeting by cyber and other attacks, underpinned by re-conceptualising the ways in which decision-making by individuals and bureaucracies can be influenced or even directed. Beginning with a short summary of the author's practical experience, the chapter then presents the notion of the choice architecture, followed by a dissection of some of the ways in which malign influence can be generated by or over decision-makers, underpinned by the author's observation of such phenomena in the real world. The chapter concludes by arguing that organisations can and should accrue competitive advantage by recognising that their decision-making competences are vulnerable to the imaginative and determined adversary. The use of fast, frequent, and cheap exercises to enhance scanning for threats placed (or placeable) within an organisation, and to supplement the situational awareness, alertness, and robust response of individuals and structures, is recommended.

    On the Stability of Software Clones: A Genealogy-Based Empirical Study

    Clones are a matter of great concern to the software engineering community because of their dual but contradictory impact on software maintenance. While there is strong empirical evidence of the harmful impact of clones on maintenance, a number of studies have also identified positive sides of code cloning during maintenance. Recently, to help determine whether clones are beneficial during software maintenance, software researchers have been conducting studies that measure the source code stability (the likelihood that code will be modified) of cloned code compared to non-cloned code. If the presence of clones in program artifacts (files, classes, methods, variables) causes the artifacts to be more frequently changed (i.e., cloned code is more unstable than non-cloned code), clones are considered harmful. Unfortunately, existing stability studies have produced contradictory results, and even now there is no concrete answer to the research question "Is cloned or non-cloned code more stable during software maintenance?" The likely reason is that these studies were conducted on different sets of subject systems, with different experimental setups, involving different clone detection tools, and investigating different stability metrics. Also, there are four major types of clones (Type 1: exact; Type 2: syntactically similar; Type 3: with some added, deleted, or modified lines; and Type 4: semantically similar), and none of these studies compared the instability of the different types.

    Focusing on these issues, we perform an empirical study implementing seven methodologies that calculate eight stability-related metrics on the same experimental setup to compare the instability of cloned and non-cloned code in the maintenance phase. We investigated the instability of three major clone types (Type 1, Type 2, and Type 3) from different dimensions, excluding Type 4 clones because existing clone detection tools cannot detect them well. According to our in-depth investigation of hundreds of revisions of 16 subject systems covering four programming languages (Java, C, C#, and Python), using two clone detection tools (NiCad and CCFinder), we found that clones generally exhibit higher instability in the maintenance phase than non-cloned code. Specifically, Type 1 and Type 3 clones are more unstable, as well as more harmful, than Type 2 clones. However, although clones are generally more unstable, in some settings they exhibit higher stability than non-cloned code. We further investigated the effect of clones on another important aspect of stability: method co-changeability (the degree to which methods change together). Intuitively, higher method co-changeability indicates higher instability of a software system. We found that clones do not have any negative effect on method co-changeability; rather, cloning can be a possible way of minimizing method co-changeability when clones are likely to evolve independently. Thus, clones have both positive and negative effects on software stability. Our empirical studies demonstrate how we can effectively use the positive sides of clones while minimizing their negative impacts.
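    The abstract does not list the eight stability metrics themselves. As a hypothetical illustration of one simple stability-style measure, the following Python sketch computes the modification frequency of cloned versus non-cloned lines across a sequence of revisions; the data structures are invented for the example and do not reflect the study's NiCad/CCFinder tooling:

        # Hypothetical illustration of one stability-style metric: the
        # average fraction of cloned vs. non-cloned lines modified per
        # revision. All data structures are invented for the example.
        def modification_frequency(revisions):
            """revisions: list of dicts with keys
                 'all'      : set of all line ids in the system,
                 'cloned'   : set of line ids detected as cloned,
                 'modified' : set of line ids changed in the next revision.
               Returns (cloned_mf, noncloned_mf) averaged over revisions."""
            cloned_rates, noncloned_rates = [], []
            for rev in revisions:
                cloned = rev["cloned"]
                noncloned = rev["all"] - cloned
                if cloned:
                    cloned_rates.append(
                        len(rev["modified"] & cloned) / len(cloned))
                if noncloned:
                    noncloned_rates.append(
                        len(rev["modified"] & noncloned) / len(noncloned))
            avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
            return avg(cloned_rates), avg(noncloned_rates)

        history = [
            {"all": set(range(100)), "cloned": set(range(20)),
             "modified": {3, 7, 15, 60}},
            {"all": set(range(100)), "cloned": set(range(20)),
             "modified": {1, 2, 80}},
        ]
        c_mf, n_mf = modification_frequency(history)
        print(f"cloned MF={c_mf:.3f}, non-cloned MF={n_mf:.3f}")
        # A higher cloned MF would suggest cloned code is less stable.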

    Using artificial intelligence to detect human errors in nuclear power plants: A case in operation and maintenance

    Human error (HE) is an important concern in safety-critical systems such as nuclear power plants (NPPs). HE has played a role in many accidents and outage incidents in NPPs, and despite increased automation it remains unavoidable. Hence, HE detection is as important as HE prevention. In NPPs, HE is rather rare, so anomaly detection, a widely used machine learning technique for detecting rare anomalous instances, can be repurposed to detect potential HE. In this study, we develop an unsupervised anomaly detection technique based on generative adversarial networks (GANs) to detect anomalies in manually collected surveillance data in NPPs. More specifically, our GAN is trained to detect mismatches between automatically recorded sensor data and manually collected surveillance data, and hence to identify anomalous instances that can be attributed to HE. We test our GAN on both a real-world dataset and an external dataset obtained from a testbed, and we benchmark our results against state-of-the-art unsupervised anomaly detection algorithms, including one-class support vector machine and isolation forest. Our results show that the proposed GAN provides improved anomaly detection performance. Our study is promising for the future development of artificial intelligence based HE detection systems.
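    The abstract names one-class SVM and isolation forest as baselines but does not describe the GAN architecture or feature encoding. The following Python sketch shows only the baseline side of such a benchmark, using standard scikit-learn APIs on synthetic sensor/manual-entry pairs (the feature layout is an assumption for illustration):

        # Sketch of the two baseline detectors named in the abstract,
        # using standard scikit-learn APIs on synthetic data. The paper's
        # GAN and its feature encoding are not described in the abstract.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        # "Normal" records: a sensor reading and a matching manual entry.
        sensor = rng.normal(50.0, 2.0, size=(500, 1))
        manual = sensor + rng.normal(0.0, 0.5, size=(500, 1))
        X_train = np.hstack([sensor, manual])

        # Test set: mostly consistent pairs, plus mismatches standing in
        # for human-error records (manual value far from sensor value).
        X_test = np.vstack([X_train[:50],
                            np.hstack([rng.normal(50, 2, (5, 1)),
                                       rng.normal(70, 2, (5, 1))])])

        for name, model in [
                ("IsolationForest", IsolationForest(random_state=0)),
                ("OneClassSVM", OneClassSVM(nu=0.05, gamma="scale"))]:
            model.fit(X_train)
            pred = model.predict(X_test)   # +1 = normal, -1 = anomaly
            print(name, "flagged", int((pred == -1).sum()),
                  "of", len(X_test), "records")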

    Rapid Response Command and Control (R2C2): a systems engineering analysis of scaleable communications for Regional Combatant Commanders

    Includes supplementary material.
    Disaster relief operations, such as the 2005 tsunami and Hurricane Katrina, and wartime operations, such as Operation Enduring Freedom and Operation Iraqi Freedom, have identified the need for a standardized command and control system interoperable among Joint, Coalition, and Interagency entities. The Systems Engineering Analysis Cohort 9 (SEA-9) Rapid Response Command and Control (R2C2) integrated project team completed a systems engineering (SE) process to address the military's command and control capability gap. During the process, the R2C2 team conducted mission analysis, generated requirements, developed and modeled architectures, and analyzed and compared current operational systems against the team's R2C2 system. The R2C2 system provided a reachback capability to the Regional Combatant Commander's (RCC) headquarters, a local communications network for situational assessments, and Internet access for civilian counterparts participating in Humanitarian Assistance/Disaster Relief operations. Because the team designed the R2C2 system to be modular, the analysis concluded that the R2C2 system was the preferred method to provide the RCC with the required flexibility and scalability to deliver a rapidly deployable command and control capability across the range of military operations.