200 research outputs found

    Supervisory Control and Data Acquisition (SCADA) System Forensics Based on the Modbus Protocol

    Supervisory Control and Data Acquisition (SCADA) has been at the core of the Operational Technology (OT) used in industries and process plants to monitor and control critical processes, especially in the energy sector. In the petroleum sub-sector, it is used to monitor the transportation, storage, and loading of petroleum products. It is linked to instruments that collect and monitor parameters such as temperature, pressure, and product density, and it issues commands to actuators through application programs installed on programmable logic controllers (PLCs). Earlier SCADA systems were isolated from the internet and were therefore protected by an air gap from attacks on interconnected systems. The recent trend is for SCADA systems to become more integrated with other business systems using Internet technologies such as Ethernet and TCP/IP. However, these TCP/IP and web technologies expose SCADA systems to the cyberattacks experienced by IT systems, such as malware. It is therefore important to conduct vulnerability assessments of SCADA systems with a view to thwarting attacks that could exploit such vulnerabilities. Where vulnerabilities have been exploited, forensic analysis is required to establish what actually happened. This paper reviews SCADA system configurations, vulnerabilities, and attack scenarios, then presents a prototype SCADA system and a forensic tool that can be used on SCADA: the tool reads the PLC memory, and Wireshark has been used to capture network communication between the SCADA system and the PLC.
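    Forensic analysis of captured SCADA traffic typically involves decoding Modbus/TCP frames like those Wireshark records between the SCADA master and the PLC. The sketch below (not the paper's tool, just a minimal illustration of the protocol layout) parses a read-holding-registers (function code 0x03) response: a 7-byte MBAP header followed by the PDU.

    ```python
    import struct

    def parse_modbus_response(frame: bytes) -> dict:
        """Decode a Modbus/TCP read-holding-registers (0x03) response frame."""
        # MBAP header: transaction id (2), protocol id (2), length (2), unit id (1)
        tid, pid, length, uid = struct.unpack(">HHHB", frame[:7])
        fc = frame[7]
        if fc != 0x03:
            raise ValueError(f"unexpected function code: {fc:#04x}")
        byte_count = frame[8]
        # Register values are big-endian 16-bit words
        regs = struct.unpack(f">{byte_count // 2}H", frame[9:9 + byte_count])
        return {"transaction_id": tid, "unit_id": uid, "registers": list(regs)}

    # Example frame: transaction 1, unit 1, two registers (0x0123, 0x0456)
    frame = bytes.fromhex("0001" "0000" "0007" "01" "03" "04" "0123" "0456")
    parsed = parse_modbus_response(frame)
    ```

    In a forensic workflow, such decoded register values can be compared against expected process parameters (temperature, pressure, density) to spot tampered commands or anomalous readings.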

    SOTER: A Runtime Assurance Framework for Programming Safe Robotics Systems

    The recent drive towards achieving greater autonomy and intelligence in robotics has led to high levels of complexity. Autonomous robots increasingly depend on third-party off-the-shelf components and complex machine-learning techniques. This trend makes it challenging to provide strong design-time certification of correct operation. To address these challenges, we present SOTER, a robotics programming framework with two key components: (1) a programming language for implementing and testing high-level reactive robotics software and (2) an integrated runtime assurance (RTA) system that helps enable the use of uncertified components while still providing safety guarantees. SOTER provides language primitives to declaratively construct an RTA module consisting of an advanced, high-performance controller (uncertified), a safe, lower-performance controller (certified), and the desired safety specification. The framework provides a formal guarantee that a well-formed RTA module always satisfies the safety specification, without completely sacrificing performance, by using higher-performance uncertified components whenever safe. SOTER allows the complex robotics software stack to be constructed as a composition of RTA modules, where each uncertified component is protected by an RTA module. To demonstrate the efficacy of our framework, we consider a real-world case study of building a safe drone surveillance system. Our experiments, both in simulation and on actual drones, show that the SOTER-enabled RTA ensures the safety of the system, including when untrusted third-party components have bugs or deviate from the desired behavior.
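    The core RTA idea described above can be sketched in a few lines. This is not SOTER's actual language API; all names, the 1-D state, and the switching threshold are hypothetical, and a real module would check reachability over a horizon rather than the instantaneous state.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class RTAModule:
        advanced: Callable[[float], float]   # uncertified, high-performance controller
        safe: Callable[[float], float]       # certified, lower-performance fallback
        is_safe: Callable[[float], bool]     # safety specification over the state

        def control(self, state: float) -> float:
            # Use the uncertified controller whenever the state satisfies
            # the safety check; otherwise fall back to the certified one.
            if self.is_safe(state):
                return self.advanced(state)
            return self.safe(state)

    # Hypothetical 1-D example: the spec requires |state| to stay below 10,
    # so the switch triggers at 8 to leave a safety margin.
    rta = RTAModule(
        advanced=lambda s: 2.0 * s,      # aggressive, uncertified
        safe=lambda s: -0.5 * s,         # conservative, certified
        is_safe=lambda s: abs(s) < 8.0,  # switching condition with margin
    )
    ```

    Composing a stack then amounts to wrapping each uncertified component in its own `RTAModule`, so a bug in any one of them is contained by that module's certified fallback.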

    Searching for Optimal Runtime Assurance via Reachability and Reinforcement Learning

    A runtime assurance system (RTA) for a given plant enables the exercise of an untrusted or experimental controller while assuring safety with a backup (or safety) controller. The relevant computational design problem is to create a logic that assures safety by switching to the safety controller as needed, while maximizing some performance criterion, such as the utilization of the untrusted controller. Existing RTA design strategies are well known to be overly conservative and, in principle, can lead to safety violations. In this paper, we formulate the optimal RTA design problem and present a new approach for solving it. Our approach relies on reward shaping and reinforcement learning. It can guarantee safety and leverage machine learning technologies for scalability. We have implemented this algorithm and present experimental results comparing our approach with state-of-the-art reachability- and simulation-based RTA approaches in a number of scenarios using aircraft models in 3D space with complex safety requirements. Our approach can guarantee safety while increasing utilization of the experimental controller over existing approaches.
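    The reward-shaping idea can be illustrated with a toy reward function: the learner is paid for each step it keeps the untrusted controller engaged and heavily penalized for any safety violation, so the learned switching logic maximizes utilization subject to safety. The weights below are assumptions for illustration, not values from the paper.

    ```python
    def shaped_reward(used_experimental: bool, violated_safety: bool) -> float:
        """Toy shaped reward for learning an RTA switching policy.

        +1 per step of experimental-controller utilization; a large
        penalty (assumed weight) dominates if safety is ever violated.
        """
        reward = 1.0 if used_experimental else 0.0
        if violated_safety:
            reward -= 100.0
        return reward

    # Over an episode, a policy that switches too eagerly forfeits the
    # utilization bonus, while one that switches too late eats the penalty.
    episode = [(True, False), (True, False), (False, False)]
    total = sum(shaped_reward(u, v) for u, v in episode)
    ```

    In the paper's setting the safety term is not left to the learned policy alone; the RTA's switching guard still enforces safety, and the shaped reward only steers the learner toward higher utilization.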

    Designing for Security: A Cybersecurity Introduction for Aerospace Education

    The world is becoming increasingly digital: the integration of communications, sensors, and data collection is becoming more and more prevalent in the Aerospace sector. Furthermore, the Aerospace sector plays a large role in connecting the world through air transportation networks, navigation satellites, information services, weather/environmental monitoring, and much more. Preventing disruptions to these networks is of utmost concern, with stability being a key factor in their construction. Recently, there has been a shift in computer science to push for security at a fundamental design level rather than as a late-stage development consideration. In contrast, the Aerospace industry is only just now seeing a push to translate existing standards and implement various cybersecurity practices. Even more troubling, many students' first exposure to aerospace concepts in their undergraduate studies neglects to mention cybersecurity as a consideration. This paper serves as a cursory introduction to these topics, with the purpose of exposing the next generation of aerospace engineers to key areas where cybersecurity concepts will prove essential.

    Evaluation of Verification Approaches Applied to a Nonlinear Control System

    As the demand for increasingly complex and autonomous systems grows, designers may consider computational and artificial intelligence methods for more advanced, reactive control. While the performance gained by such increasingly intelligent systems may be superior to that of traditional control techniques, the lack of transparency in these systems and the opportunity for emergent behavior limit their application in the field. New verification and validation methods must be developed to ensure that the output of such controllers does not put the system, or any people interacting with it, in danger. This challenge was highlighted by the former Air Force Chief Scientist in his 2010 Technology Horizons report, stating: "It is possible to develop systems having high levels of autonomy, but it is the lack of suitable [verification and validation] (V&V) methods that prevents all but relatively low levels of autonomy from being certified for use."