
    On Ladder Logic Bombs in Industrial Control Systems

    In industrial control systems, devices such as Programmable Logic Controllers (PLCs) are commonly used to interact directly with sensors and actuators and to perform local automatic control. PLCs run software on two different layers: a) firmware (i.e. the OS) and b) control logic (processing sensor readings to determine control actions). In this work, we discuss Ladder Logic Bombs (LLBs), i.e. malware written in ladder logic (or one of the other IEC 61131-3-compatible languages). Such malware would be inserted by an attacker into existing control logic on a PLC, and would either persistently change its behaviour or wait for specific trigger signals to activate malicious behaviour. For example, an LLB could replace legitimate sensor readings with manipulated values. We see the concept of LLBs as a generalization of attacks such as Stuxnet. We introduce LLBs on an abstract level and then demonstrate several designs based on real PLC devices in our lab. In particular, we focus on stealthy LLBs, i.e. LLBs that are hard for human operators to detect when manually validating the program running on a PLC. In addition to introducing vulnerabilities on the logic layer, we discuss countermeasures and propose two detection techniques. Comment: 11 pages, 14 figures, 2 tables, 1 algorithm
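
    As an illustration of the trigger mechanism described above, the following Python sketch simulates a PLC scan cycle in which injected logic passes legitimate sensor readings through until a hypothetical trigger condition fires, then latches and substitutes a manipulated value. All names, thresholds, and the simulated sensor are assumptions for illustration; a real LLB would be written in ladder logic or another IEC 61131-3 language and run on the PLC itself.

        import random

        # Illustrative simulation of a trigger-based ladder logic bomb (LLB).
        # All names and thresholds here are hypothetical; a real LLB would be
        # written in an IEC 61131-3 language and run on the PLC itself.

        TRIGGER_SETPOINT = 90.0   # hypothetical trigger: reading above 90 units
        SPOOFED_READING = 50.0    # manipulated value reported once triggered

        def read_sensor() -> float:
            # Stand-in for a real sensor read; here just simulated noise.
            return random.uniform(40.0, 100.0)

        def scan_cycle(triggered: bool) -> tuple[float, bool]:
            # One PLC scan: the bomb stays dormant until the trigger fires,
            # then latches and replaces legitimate readings with spoofed ones.
            real_value = read_sensor()
            if real_value > TRIGGER_SETPOINT:
                triggered = True
            reported = SPOOFED_READING if triggered else real_value
            return reported, triggered

        if __name__ == "__main__":
            triggered = False
            for _ in range(20):
                reported, triggered = scan_cycle(triggered)
                print(f"reported={reported:.1f} triggered={triggered}")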

    Influence of developer factors on code quality: a data study

    Automatic source-code inspection tools help to assess, monitor, and improve code quality. Since these tools examine only the software project's codebase, they overlook other factors that may affect code quality and the assessment of technical debt (TD). Our initial hypothesis is that human factors associated with the software developers, such as coding expertise, communication skills, and experience in the project, have a measurable impact on code quality. In this exploratory study, we test this hypothesis on two large open-source repositories, using TD as a code quality metric together with data that can be inferred from the version control systems. The preliminary results of our statistical analysis suggest that the developers' level of participation and their experience in the project correlate positively with the amount of TD they introduce. In contrast, communication skills have barely any impact on TD.
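
    A minimal sketch of the kind of statistical analysis the study describes, assuming per-developer commit counts (participation) and counts of TD issues introduced have already been mined from the version control system. The data below are invented, and the abstract does not name the exact test; a rank correlation such as Spearman's is one plausible choice given the skewed distributions typical of commit activity.

        from scipy.stats import spearmanr

        # Hypothetical per-developer data mined from version control:
        # commit counts (participation) and TD issues introduced.
        commits = [12, 85, 40, 7, 150, 60, 25, 95]
        td_introduced = [3, 20, 9, 1, 41, 15, 6, 22]

        # Rank correlation is robust to skewed commit-count distributions.
        rho, p_value = spearmanr(commits, td_introduced)
        print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")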

    A Framework for Software Reliability Management Based on the Software Development Profile Model

    Recent empirical studies of software have shown a strong correlation between the change history of files and their fault-proneness. Statistical data analysis techniques, such as regression analysis, have been applied to validate this finding. While these regression-based models show a correlation between selected software attributes and defect-proneness, in most cases they are inadequate for demonstrating causality. For this reason, we introduce the Software Development Profile Model (SDPM) as a causal model for identifying defect-prone software artifacts based on their change history and software development activities. The SDPM is based on the assumption that human error during software development is the sole cause of defects leading to software failures. The SDPM assumes that whenever a software construct is touched, it has a chance of becoming defective. Software development activities such as inspection, testing, and rework further affect the remaining number of software defects. Under this assumption, the SDPM estimates the defect content of software artifacts based on software change history and software development activities. The SDPM improves on existing defect estimation models because it not only uses evidence from the current project to estimate defect content, but also allows software managers to manage software projects quantitatively by making risk-informed decisions early in the software development life cycle. We apply the SDPM to several real-life software development projects, showing how it is used, analyzing its accuracy in predicting defect-prone files, and comparing the results with those of a Poisson regression model.
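
    The following Python sketch illustrates the SDPM's core assumption in deliberately simplified form: each touch of an artifact may inject a defect, and each recorded activity (inspection, testing, rework) removes a fraction of whatever remains. The injection probability and activity effectiveness values are hypothetical placeholders, not parameters estimated in the paper.

        # Simplified, hypothetical illustration of the SDPM idea. All numeric
        # parameters below are placeholders, not values from the paper.

        P_DEFECT_PER_TOUCH = 0.05     # chance that one change injects a defect
        ACTIVITY_EFFECTIVENESS = {    # fraction of remaining defects removed
            "inspection": 0.4,
            "testing": 0.5,
            "rework": 0.2,
        }

        def estimate_defect_content(touches: int, activities: list[str]) -> float:
            # Expected defects injected by the artifact's change history.
            remaining = touches * P_DEFECT_PER_TOUCH
            # Each development activity removes a share of what remains.
            for activity in activities:
                remaining *= 1.0 - ACTIVITY_EFFECTIVENESS.get(activity, 0.0)
            return remaining

        # Example: a file touched 30 times, then inspected and tested.
        print(estimate_defect_content(30, ["inspection", "testing"]))  # 0.45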

    Using Quantitative Methods as Support for Audit of the Distributed Informatics Systems

    This paper highlights some issues regarding how an indicator system should be developed and used in an audit process. Distributed systems are presented from the points of view of their main properties, architectures, applications, software quality characteristics, and the scope of the audit process in such systems. The audit process is defined in accordance with the ISO 19011 standard, and its main characteristics are highlighted. Before quantitative methods can be used in audit processes, the framework in which the indicators are built must be defined. Types of indicators used in the audit process and classes of measurement scale are presented. An audit process is carried out at different levels, and the supporting indicators must correspond to the audit object. The paper presents some requirements on the indicators depending on the level of the audit.
    Keywords: Quantitative Methods, Audit Process, Distributed Informatics System
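
    As a sketch of how an indicator and its measurement-scale class might be represented for audit support, the following Python fragment defines a hypothetical indicator record tied to an audit level and a scale class. The field names and the example indicator are illustrative assumptions, not taken from the paper.

        from dataclasses import dataclass
        from enum import Enum

        # Classes of measurement scale commonly distinguished for indicators.
        class Scale(Enum):
            NOMINAL = "nominal"
            ORDINAL = "ordinal"
            INTERVAL = "interval"
            RATIO = "ratio"

        @dataclass
        class AuditIndicator:
            # Hypothetical record tying an indicator to the audit level it supports.
            name: str
            audit_level: str    # e.g. "system", "application", "component"
            scale: Scale
            unit: str

        # Example: availability of a distributed node, a ratio-scale indicator.
        availability = AuditIndicator(
            name="node availability",
            audit_level="system",
            scale=Scale.RATIO,
            unit="percent uptime",
        )
        print(availability)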

    Continuous maintenance and the future – Foundations and technological challenges

    High-value, long-life products require continuous maintenance throughout their life cycle to achieve the required performance at optimum through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment, and modelling, together with life-cycle 'big data' analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context also encompasses the role of IoT, standards, and cyber security.
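
    As one minimal illustration of component-level degradation modelling, the sketch below assumes exponential health decay and a fixed maintenance threshold, and computes when intervention is due. The model form, decay rate, and threshold are illustrative assumptions, not anything prescribed in the paper.

        import math

        # Minimal, hypothetical degradation model: component health decays
        # exponentially; maintenance is triggered below a health threshold.

        DECAY_RATE = 0.02       # per operating hour (hypothetical)
        MAINT_THRESHOLD = 0.6   # schedule maintenance below 60% health

        def health(hours: float) -> float:
            return math.exp(-DECAY_RATE * hours)

        def hours_until_maintenance() -> float:
            # Solve exp(-r * t) = threshold for t.
            return -math.log(MAINT_THRESHOLD) / DECAY_RATE

        print(f"maintenance due after {hours_until_maintenance():.1f} hours")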

    The cleanroom case study in the Software Engineering Laboratory: Project description and early analysis

    This case study analyzes the application of the cleanroom software development methodology to the development of production software at the NASA Goddard Space Flight Center. The cleanroom methodology emphasizes human discipline in program verification to produce reliable software products that are right the first time. Preliminary analysis of the cleanroom case study shows that the method can be applied successfully in the Flight Dynamics Division (FDD) environment and may increase staff productivity and product quality. Compared to typical Software Engineering Laboratory (SEL) activities, there is evidence of lower failure rates, a more complete and consistent set of inline code documentation, a different distribution of phase effort activity, and a different growth profile in terms of lines of code developed. The major goals of the study were to: (1) assess the process used in the SEL cleanroom model with respect to team structure, team activities, and effort distribution; (2) analyze the products of the SEL cleanroom model and determine the impact on measures of interest, including reliability, productivity, overall life-cycle cost, and software quality; and (3) analyze the residual products of the application of the SEL cleanroom model, such as fault distribution, error characteristics, system growth, and computer usage.